Jan 29 08:38:38 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 08:38:38 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:38:38 crc 
restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:38 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:38:39 crc restorecon[4747]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:38:39 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 08:38:40 crc kubenswrapper[5031]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.031424 5031 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041678 5031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041718 5031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041730 5031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041740 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041750 5031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041759 5031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041768 5031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041778 5031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041788 5031 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041800 5031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041812 5031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041823 5031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041833 5031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041846 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041855 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041863 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041871 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041880 5031 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041888 5031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041899 5031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041909 5031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041918 5031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041926 5031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041934 5031 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041943 5031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041951 5031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041958 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041966 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041974 5031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041982 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.041989 5031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042007 5031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042016 5031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042023 5031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042031 5031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042039 5031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042049 5031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042059 5031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042067 5031 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042075 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042085 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042092 5031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042100 5031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042109 5031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042118 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042128 5031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042136 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042144 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042152 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042160 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042169 5031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042177 5031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042185 5031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042193 5031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042201 5031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042209 5031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042217 5031 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042237 5031 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042245 5031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042253 5031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042261 5031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042268 5031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042276 5031 feature_gate.go:330] unrecognized 
feature gate: MachineConfigNodes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042284 5031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042292 5031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042300 5031 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042308 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042315 5031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042323 5031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042331 5031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.042341 5031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042571 5031 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042591 5031 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042608 5031 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042620 5031 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042632 5031 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042641 5031 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042653 5031 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042664 5031 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042673 5031 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042685 5031 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042695 5031 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042706 5031 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042715 5031 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042724 5031 flags.go:64] FLAG: --cgroup-root="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042732 5031 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042741 5031 flags.go:64] FLAG: --client-ca-file="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042749 5031 flags.go:64] FLAG: --cloud-config="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042758 5031 flags.go:64] FLAG: --cloud-provider="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042767 5031 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042777 5031 flags.go:64] FLAG: --cluster-domain="" Jan 29 
08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042786 5031 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042795 5031 flags.go:64] FLAG: --config-dir="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042803 5031 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042812 5031 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042824 5031 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042833 5031 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042842 5031 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042851 5031 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042860 5031 flags.go:64] FLAG: --contention-profiling="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042868 5031 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042877 5031 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042886 5031 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042894 5031 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042905 5031 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042915 5031 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042924 5031 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042932 5031 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042941 5031 flags.go:64] FLAG: --enable-server="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042950 5031 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042964 5031 flags.go:64] FLAG: --event-burst="100" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042973 5031 flags.go:64] FLAG: --event-qps="50" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042982 5031 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042991 5031 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.042999 5031 flags.go:64] FLAG: --eviction-hard="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043010 5031 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043019 5031 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043027 5031 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043037 5031 flags.go:64] FLAG: --eviction-soft="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043046 5031 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043055 5031 flags.go:64] FLAG: 
--exit-on-lock-contention="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043065 5031 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043073 5031 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043082 5031 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043091 5031 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043100 5031 flags.go:64] FLAG: --feature-gates="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043111 5031 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043120 5031 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043128 5031 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043137 5031 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043146 5031 flags.go:64] FLAG: --healthz-port="10248" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043156 5031 flags.go:64] FLAG: --help="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043165 5031 flags.go:64] FLAG: --hostname-override="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043174 5031 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043184 5031 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043193 5031 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043202 5031 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043210 5031 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043219 5031 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043228 5031 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043237 5031 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043246 5031 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043255 5031 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043264 5031 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043273 5031 flags.go:64] FLAG: --kube-reserved="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043281 5031 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043290 5031 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043300 5031 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043310 5031 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043320 5031 flags.go:64] FLAG: --lock-file="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043329 5031 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 
29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043338 5031 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043347 5031 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043432 5031 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043446 5031 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043454 5031 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043463 5031 flags.go:64] FLAG: --logging-format="text" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043473 5031 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043482 5031 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043491 5031 flags.go:64] FLAG: --manifest-url="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043501 5031 flags.go:64] FLAG: --manifest-url-header="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043513 5031 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043522 5031 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043534 5031 flags.go:64] FLAG: --max-pods="110" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043543 5031 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043553 5031 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043562 5031 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043570 5031 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043579 5031 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043588 5031 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043597 5031 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043618 5031 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043627 5031 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043636 5031 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043646 5031 flags.go:64] FLAG: --pod-cidr="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043655 5031 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043668 5031 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043678 5031 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043687 5031 flags.go:64] FLAG: --pods-per-core="0" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043696 5031 flags.go:64] 
FLAG: --port="10250" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043705 5031 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043714 5031 flags.go:64] FLAG: --provider-id="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043723 5031 flags.go:64] FLAG: --qos-reserved="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043732 5031 flags.go:64] FLAG: --read-only-port="10255" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043743 5031 flags.go:64] FLAG: --register-node="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043752 5031 flags.go:64] FLAG: --register-schedulable="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043760 5031 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043775 5031 flags.go:64] FLAG: --registry-burst="10" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043785 5031 flags.go:64] FLAG: --registry-qps="5" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043793 5031 flags.go:64] FLAG: --reserved-cpus="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043804 5031 flags.go:64] FLAG: --reserved-memory="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043815 5031 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043824 5031 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043834 5031 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043843 5031 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043851 5031 flags.go:64] FLAG: --runonce="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043860 5031 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043871 5031 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043881 5031 flags.go:64] FLAG: --seccomp-default="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043889 5031 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043898 5031 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043907 5031 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043916 5031 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043925 5031 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043934 5031 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043943 5031 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043952 5031 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043962 5031 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043972 5031 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.043982 5031 flags.go:64] FLAG: --system-cgroups="" Jan 29 08:38:40 
crc kubenswrapper[5031]: I0129 08:38:40.043991 5031 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044004 5031 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044013 5031 flags.go:64] FLAG: --tls-cert-file="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044021 5031 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044032 5031 flags.go:64] FLAG: --tls-min-version="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044041 5031 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044050 5031 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044058 5031 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044067 5031 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044076 5031 flags.go:64] FLAG: --v="2" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044088 5031 flags.go:64] FLAG: --version="false" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044099 5031 flags.go:64] FLAG: --vmodule="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044109 5031 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044118 5031 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044316 5031 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044328 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044337 5031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044346 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044354 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044402 5031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044414 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044425 5031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044435 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044443 5031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044451 5031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044459 5031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044467 5031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044475 5031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:38:40 crc 
kubenswrapper[5031]: W0129 08:38:40.044483 5031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044490 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044499 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044509 5031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044519 5031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044529 5031 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044538 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044546 5031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044554 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044562 5031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044573 5031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044584 5031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044594 5031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044602 5031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044611 5031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044619 5031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044626 5031 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044634 5031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044648 5031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044656 5031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044663 5031 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044671 5031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044679 5031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044689 5031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
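The flags.go:64 entries above record the effective value of every kubelet flag at startup, defaults included, one FLAG: --name="value" pair per entry. A small sketch, again against a hypothetical kubelet.log capture, that turns the dump into a dict (handy for diffing the flags between two boots); note finditer, since several entries can share one physical line in a capture like this:

    import re

    # Matches e.g.: flags.go:64] FLAG: --node-ip="192.168.126.11"
    FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    def flag_dump(path):
        """Return {flag: value} from a kubelet startup journal capture."""
        with open(path) as f:
            return {m.group(1): m.group(2) for line in f for m in FLAG.finditer(line)}

    flags = flag_dump("kubelet.log")
    print(flags.get("--container-runtime-endpoint"))  # /var/run/crio/crio.sock above
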
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044701 5031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044709 5031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044717 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044725 5031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044733 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044741 5031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044749 5031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044757 5031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044764 5031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044772 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044780 5031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044788 5031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044796 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044803 5031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044811 5031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044819 5031 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044826 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044834 5031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044842 5031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044849 5031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044857 5031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044865 5031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044872 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044880 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044887 5031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044895 5031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 
08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044905 5031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044912 5031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044920 5031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044927 5031 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044935 5031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044943 5031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.044950 5031 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.044963 5031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.056612 5031 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.056672 5031 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056763 5031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056778 5031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056785 5031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056791 5031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056796 5031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056802 5031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056807 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056813 5031 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056821 5031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
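The feature_gate.go:386 summary above prints the resolved gates in Go's map-literal rendering, {map[Name:bool ...]}. A sketch that converts that rendering into a Python dict; the sample line is abridged from the entry above:

    import re

    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true "
            "VolumeAttributesClass:false]}")  # abridged from the log entry above

    # Each entry inside {map[...]} is Name:true or Name:false
    gates = {name: value == "true"
             for name, value in re.findall(r"(\w+):(true|false)", line)}

    assert gates["KMSv1"] and not gates["VolumeAttributesClass"]
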
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056827 5031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056832 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056837 5031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056841 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056846 5031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056851 5031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056856 5031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056860 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056864 5031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056869 5031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056874 5031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056879 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056883 5031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056888 5031 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056893 5031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056897 5031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056902 5031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056907 5031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056911 5031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056918 5031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056924 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056929 5031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056935 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056940 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056946 5031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056953 5031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056959 5031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056965 5031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056971 5031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056977 5031 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056983 5031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056988 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056992 5031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.056997 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057002 5031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057007 5031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057011 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057016 5031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057021 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057025 5031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057030 5031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057034 5031 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057040 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057045 5031 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057051 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057056 5031 feature_gate.go:330] 
unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057061 5031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057065 5031 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057070 5031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057075 5031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057080 5031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057085 5031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057090 5031 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057094 5031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057098 5031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057103 5031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057108 5031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057114 5031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057120 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057126 5031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057131 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057137 5031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.057146 5031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057306 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057317 5031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057323 5031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057328 5031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057333 5031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 
08:38:40.057339 5031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057345 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057350 5031 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057355 5031 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057361 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057424 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057430 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057435 5031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057440 5031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057445 5031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057449 5031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057454 5031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057458 5031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057463 5031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057468 5031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057472 5031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057477 5031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057481 5031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057486 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057517 5031 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057522 5031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057526 5031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057531 5031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057536 5031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057541 5031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057547 5031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:38:40 crc 
kubenswrapper[5031]: W0129 08:38:40.057552 5031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057557 5031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057562 5031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057569 5031 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057574 5031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057578 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057583 5031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057587 5031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057592 5031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057597 5031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057601 5031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057606 5031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057611 5031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057617 5031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057623 5031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057628 5031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057633 5031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057638 5031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057642 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057647 5031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057653 5031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057658 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057663 5031 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057667 5031 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057672 5031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057678 5031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057684 5031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057690 5031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
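The unrecognized-gate warnings repeat because the gate set is resolved several times during startup, and feature_gate.go:330 fires once per unknown gate on every pass (the names look like OpenShift-level gates that the embedded Kubernetes parser does not know). A sketch that collapses the repetition into one count per gate, with kubelet.log again standing in for a capture of this journal:

    import re
    from collections import Counter

    UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

    with open("kubelet.log") as f:  # hypothetical capture of this journal
        counts = Counter(m.group(1) for line in f
                         for m in UNRECOGNIZED.finditer(line))

    # e.g. GatewayAPI shows up once per parsing pass
    for gate, n in counts.most_common(5):
        print(f"{gate}: warned {n} times")
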
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057696 5031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057701 5031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057709 5031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057714 5031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057719 5031 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057723 5031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057728 5031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057732 5031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057737 5031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057742 5031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057747 5031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.057752 5031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.057761 5031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.058014 5031 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.061971 5031 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.062060 5031 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.063712 5031 server.go:997] "Starting client certificate rotation"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.063760 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.064879 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 23:58:01.580673317 +0000 UTC
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.064983 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.096301 5031 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.099404 5031 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.099522 5031 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.122341 5031 log.go:25] "Validated CRI v1 runtime API"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.160981 5031 log.go:25] "Validated CRI v1 image API"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.163108 5031 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.170432 5031 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-08-33-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.170470 5031 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.190899 5031 manager.go:217] Machine: {Timestamp:2026-01-29 08:38:40.188210518 +0000 UTC m=+0.687798510 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3666a2ab-1f8e-4807-b408-7fd2eb819480 BootID:1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3a:07:5d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3a:07:5d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:57:e9:20 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5f:0c:d0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:59:9e:ee Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:71:23:84 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:1c:30:62 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:73:a2:7f:d5:d7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:b7:f0:10:78:e8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.191175 5031 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.191332 5031 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.191651 5031 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.191861 5031 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.191904 5031 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.192133 5031 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.192147 5031 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.192747 5031 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.192797 5031 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.193068 5031 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.193197 5031 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.197567 5031 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.197595 5031 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.197613 5031 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.197628 5031 kubelet.go:324] "Adding apiserver pod source"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.197642 5031 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.202228 5031 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.203459 5031 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.205268 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.205406 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError"
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.205972 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.206146 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.207464 5031 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.209045 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.209071 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.209079 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.209093 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211102 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211152 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211173 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211202 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211226 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211245 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211268 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.211298 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.213129 5031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.214352 5031 server.go:1280] "Started kubelet"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.216029 5031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.216124 5031 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.216128 5031 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217114 5031 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217636 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217681 5031 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217883 5031 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217924 5031 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.217923 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:39:41.529690051 +0000 UTC
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.218021 5031 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 08:38:40 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.218114 5031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.224304 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.224468 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError"
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.225119 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="200ms"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225142 5031 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225192 5031 factory.go:55] Registering systemd factory
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225212 5031 factory.go:221] Registration of the systemd container factory successfully
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225796 5031 factory.go:153] Registering CRI-O factory
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225823 5031 factory.go:221] Registration of the crio container factory successfully
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225848 5031 factory.go:103] Registering Raw factory
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.225867 5031 manager.go:1196] Started watching for new ooms in manager
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.227217 5031 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.229128 5031 manager.go:319] Starting recovery of all containers
Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.230469 5031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.153:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f26df8fefb9b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:38:40.214309303 +0000 UTC m=+0.713897255,LastTimestamp:2026-01-29 08:38:40.214309303 +0000 UTC m=+0.713897255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.244957 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.245100 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.245137 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.245182 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.245206 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.245244 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.248903 5031 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.248978 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249018 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249044 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249077 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249096 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249125 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249169 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249197 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249216 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249244 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249279 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249308 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249330 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249355 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249397 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249455 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249481 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249501 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249526 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249545 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249574 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249608 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249628 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249653 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249674 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249692 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249714 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249732 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249757 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249780 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249842 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249879 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249900 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249919 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249951 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.249987 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250013 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250035 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250061 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250094 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250123 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250164 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250184 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250205 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250230 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250249 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250281 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250321 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250340 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250391 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250413 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250437 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250455 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250475 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250497 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250515 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250538 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250553 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250576 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250595 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250613 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250633 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250652 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250673 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250688 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250706 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250727 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250749 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250774 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250796 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250817 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250838 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250855 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250876 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250900 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250925 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250952 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250969 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.250987 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251010 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251026 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251048 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251066 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251084 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251106 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251128 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251154 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251171 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251191 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251211 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251229 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251252 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251273 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251291 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251313 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251330 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251351 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251392 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251423 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251448 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251474 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251499 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251519 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251544 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251570 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251591 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251615 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251638 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251656 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251674 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251696 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251713 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251735 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251754 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251771 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251791 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251811 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251832 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251849 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251871 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251901 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251924 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251946 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251969 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.251986 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252006 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252028 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252064 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252094 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252117 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252146 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252172 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252195 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252223 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252252 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252281 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252309 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252326 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252391 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252411 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252432 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252454 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252475 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252503 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252527 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252545 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252566 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252584 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252603 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252622 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252642 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252667 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252690 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252710 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252726 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252743 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252772 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252801 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252820 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252834 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252849 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252869 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252890 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252918 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252938 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252957 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.252985 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253019 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253043 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253064 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253086 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253109 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253128 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253151 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253169 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253185 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253202 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253218 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253233 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253253 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253271 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253295 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253313 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253327 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253633 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253696 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253850 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253932 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253958 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.253998 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254014 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254036 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254052 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254072 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254087 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254101 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254118 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254152 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254170 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254183 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254196 5031 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254278 5031 reconstruct.go:97] "Volume reconstruction finished" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.254299 5031 reconciler.go:26] "Reconciler: start to sync state" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.260402 5031 manager.go:324] Recovery completed Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.273599 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.276147 5031 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.276224 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.276243 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.277548 5031 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.277569 5031 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.277591 5031 state_mem.go:36] "Initialized new in-memory state store" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.277694 5031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.280722 5031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.281190 5031 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.281250 5031 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.281306 5031 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.284449 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.284576 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.300446 5031 policy_none.go:49] "None policy: Start" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.302020 5031 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.302047 5031 state_mem.go:35] "Initializing new in-memory state store" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.323336 5031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.363135 5031 manager.go:334] "Starting Device Plugin manager" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.363255 5031 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.363275 5031 server.go:79] "Starting device plugin registration server" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.364038 5031 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.364063 5031 container_log_manager.go:189] "Initializing container 
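[annotation] Two gates are visible in the records above: the main sync loop refuses to synchronize pods until the container runtime status check has completed and the PLEG (pod lifecycle event generator) has had at least one successful relist, and the client-go reflectors cannot list resources while api-int.crc.testing:6443 refuses connections. A small sketch of the PLEG health gate follows, with an assumed threshold; the pleg type is a simplified stand-in.

```go
// Sketch (illustrative, not kubelet source) of the health gate behind
// "Skipping pod synchronization ... PLEG is not healthy" above: the sync
// loop only proceeds once the PLEG has relisted containers recently enough.
package main

import (
	"errors"
	"fmt"
	"time"
)

type pleg struct {
	lastRelist time.Time // zero until the first successful relist
}

// healthy mirrors the two failure modes in the log line: no successful
// relist yet, or the last relist is too old.
func (p *pleg) healthy(threshold time.Duration) error {
	if p.lastRelist.IsZero() {
		return errors.New("pleg has yet to be successful")
	}
	if age := time.Since(p.lastRelist); age > threshold {
		return fmt.Errorf("pleg was last seen active %v ago", age)
	}
	return nil
}

func main() {
	p := &pleg{}
	if err := p.healthy(3 * time.Minute); err != nil { // threshold assumed
		fmt.Println("Skipping pod synchronization:", err)
	}
	p.lastRelist = time.Now() // pretend a relist just succeeded
	fmt.Println("healthy now:", p.healthy(3*time.Minute) == nil)
}
```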
log rotate workers" workers=1 monitorPeriod="10s" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.364253 5031 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.364397 5031 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.364418 5031 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.370610 5031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.381824 5031 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.382047 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.383553 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.383605 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.383620 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.383804 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.384204 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.384287 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385305 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385344 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385402 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385715 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385729 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385820 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.385835 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.386010 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.386055 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387076 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387171 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387399 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387431 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387445 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387524 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387721 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.387792 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.388875 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.388906 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.388917 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389037 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389222 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389278 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389555 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389624 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389921 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.389937 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.390143 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.390180 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.390631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.390674 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.390702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.391143 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.391173 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.391186 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.425904 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="400ms" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.459742 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.459905 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460007 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460113 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460190 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460271 5031 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460360 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460477 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460577 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460728 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460823 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.460918 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.461018 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.461112 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.461207 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.464636 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.465450 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.465505 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.465516 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.465537 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.465808 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.153:6443: connect: connection refused" node="crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.562938 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563189 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563256 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563328 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563416 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563337 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 
08:38:40.563578 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563644 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563723 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563609 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563808 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563862 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563903 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.563984 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564046 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564116 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564187 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564076 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564058 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564185 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564233 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564249 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564263 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564277 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564565 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564593 5031 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564615 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564634 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564042 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.564812 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.666135 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.667338 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.667385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.667397 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.667429 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.667717 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.153:6443: connect: connection refused" node="crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.723643 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.731632 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.759489 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.770911 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: I0129 08:38:40.771990 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.782479 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0a8328da37b70a66902f4f6cd14e2f04af5e86cbae801ab07c5b9887dbdcec75 WatchSource:0}: Error finding container 0a8328da37b70a66902f4f6cd14e2f04af5e86cbae801ab07c5b9887dbdcec75: Status 404 returned error can't find the container with id 0a8328da37b70a66902f4f6cd14e2f04af5e86cbae801ab07c5b9887dbdcec75 Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.787031 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1d57e4b374ad4cb6ec77582ab71f65d226a1a97e3568036d49eb15fa2ad87abe WatchSource:0}: Error finding container 1d57e4b374ad4cb6ec77582ab71f65d226a1a97e3568036d49eb15fa2ad87abe: Status 404 returned error can't find the container with id 1d57e4b374ad4cb6ec77582ab71f65d226a1a97e3568036d49eb15fa2ad87abe Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.789624 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6d2371f0635b21c25c5eaff31a5156033b12ca69b5556676ef8192c51f778450 WatchSource:0}: Error finding container 6d2371f0635b21c25c5eaff31a5156033b12ca69b5556676ef8192c51f778450: Status 404 returned error can't find the container with id 6d2371f0635b21c25c5eaff31a5156033b12ca69b5556676ef8192c51f778450 Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.799657 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-156869184ad270b96cb27c2b35e2f9d2f6c9a576225b6d965a2cc8f487a2aec2 WatchSource:0}: Error finding container 156869184ad270b96cb27c2b35e2f9d2f6c9a576225b6d965a2cc8f487a2aec2: Status 404 returned error can't find the container with id 156869184ad270b96cb27c2b35e2f9d2f6c9a576225b6d965a2cc8f487a2aec2 Jan 29 08:38:40 crc kubenswrapper[5031]: W0129 08:38:40.802220 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-624b49614dbbe4dfdcacf7f352c9c11d1feb273b07e17e8f0835ad36c1fdf688 WatchSource:0}: Error finding container 624b49614dbbe4dfdcacf7f352c9c11d1feb273b07e17e8f0835ad36c1fdf688: Status 404 returned error can't find the container with id 624b49614dbbe4dfdcacf7f352c9c11d1feb273b07e17e8f0835ad36c1fdf688 Jan 29 08:38:40 crc kubenswrapper[5031]: E0129 08:38:40.827795 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="800ms" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.069007 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.217466 5031 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.218621 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:26:49.269199734 +0000 UTC Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.233478 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.233540 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.233558 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.233585 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.234174 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.153:6443: connect: connection refused" node="crc" Jan 29 08:38:41 crc kubenswrapper[5031]: W0129 08:38:41.272821 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.272910 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.288891 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"156869184ad270b96cb27c2b35e2f9d2f6c9a576225b6d965a2cc8f487a2aec2"} Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.290522 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d2371f0635b21c25c5eaff31a5156033b12ca69b5556676ef8192c51f778450"} Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.291679 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0a8328da37b70a66902f4f6cd14e2f04af5e86cbae801ab07c5b9887dbdcec75"} Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.292734 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1d57e4b374ad4cb6ec77582ab71f65d226a1a97e3568036d49eb15fa2ad87abe"} Jan 29 08:38:41 crc kubenswrapper[5031]: I0129 08:38:41.295097 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"624b49614dbbe4dfdcacf7f352c9c11d1feb273b07e17e8f0835ad36c1fdf688"} Jan 29 08:38:41 crc kubenswrapper[5031]: W0129 08:38:41.410787 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.410895 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:41 crc kubenswrapper[5031]: W0129 08:38:41.441419 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.441564 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:41 crc kubenswrapper[5031]: W0129 08:38:41.607019 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.607124 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:41 crc kubenswrapper[5031]: E0129 08:38:41.629235 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="1.6s" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.034804 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.036198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.036233 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.036246 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.036271 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:42 crc kubenswrapper[5031]: E0129 
08:38:42.036765 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.153:6443: connect: connection refused" node="crc" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.217934 5031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.218952 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:28:05.785169468 +0000 UTC Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.230571 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 08:38:42 crc kubenswrapper[5031]: E0129 08:38:42.231604 5031 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.301153 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0" exitCode=0 Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.301252 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.301473 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.302971 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.303027 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.303045 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.304242 5031 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1" exitCode=0 Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.304328 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.304416 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.305519 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc 
kubenswrapper[5031]: I0129 08:38:42.305547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.305559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.305875 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.306791 5031 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55" exitCode=0 Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.306935 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.307296 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.310957 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.311000 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.311019 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.310950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.311114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.311126 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.314829 5031 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400" exitCode=0 Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.314950 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.314954 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.317236 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.317268 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.317277 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:42 crc 
kubenswrapper[5031]: I0129 08:38:42.321180 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.321248 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11"} Jan 29 08:38:42 crc kubenswrapper[5031]: I0129 08:38:42.321270 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.217728 5031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.219553 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:59:13.505174082 +0000 UTC Jan 29 08:38:43 crc kubenswrapper[5031]: E0129 08:38:43.230598 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="3.2s" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.325998 5031 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70" exitCode=0 Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.326074 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.326121 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.327096 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.327132 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.327144 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.329351 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.329442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.329466 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.329383 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.330557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.330597 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.330608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.331621 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.331617 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74e37c385b7c817a87383778a95e8692b36950e82793bc221a7b1eb04083b132"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.334884 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.334966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.334987 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.338698 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.338854 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.340444 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.340545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.340631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.342761 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.342853 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.342926 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.342984 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.343043 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d"} Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.342895 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.343952 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.344011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.344022 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: W0129 08:38:43.388161 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.153:6443: connect: connection refused Jan 29 08:38:43 crc kubenswrapper[5031]: E0129 08:38:43.388289 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.153:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.637487 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.639117 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.639163 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 08:38:43.639181 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:43 crc kubenswrapper[5031]: I0129 
08:38:43.639212 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:43 crc kubenswrapper[5031]: E0129 08:38:43.639753 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.153:6443: connect: connection refused" node="crc" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.220111 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:44:18.031849967 +0000 UTC Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.347512 5031 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc" exitCode=0 Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.347624 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.347664 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.347723 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348336 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348335 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348377 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc"} Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348495 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348585 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348831 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348891 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.348991 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.349023 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.349042 5031 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.349052 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351074 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351099 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351109 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351445 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:44 crc kubenswrapper[5031]: I0129 08:38:44.351482 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.221303 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:59:56.410192555 +0000 UTC Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.353933 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7cbf06e68778d628ad3f1e9788fd5561af77781cb1ea44a75bb365c164747a49"} Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354016 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58e18c4e94401e069ecbb55ee30edae67591da008ce0b9aededca0e164ddd09e"} Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354038 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ed27407f74e0e42326f42118c3a585ceaca50f845d98fbd925b441588c376916"} Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354057 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1cc720ab47dc43d10fb8d4518891fa77ad4a77c202f81f7052295cffe3192b42"} Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354072 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d3b92ae3d176121c1c6dc75aad307d0025b046b1116b47b5fac22db95279e7d7"} Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354083 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.354964 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.355007 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:45 crc 
kubenswrapper[5031]: I0129 08:38:45.355020 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.357517 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.357720 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.358707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.358757 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:45 crc kubenswrapper[5031]: I0129 08:38:45.358778 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.222473 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:24:23.521410235 +0000 UTC Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.358620 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.360165 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.360261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.360299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.483189 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.483586 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.483659 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.485762 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.485844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.485869 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.526878 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.680802 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.681026 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 
08:38:46.682960 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.683034 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.683058 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.734985 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.840196 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.842238 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.842329 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.842358 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:46 crc kubenswrapper[5031]: I0129 08:38:46.842458 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.223144 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:39:56.939347967 +0000 UTC Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.361340 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.362972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.363037 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.363054 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.389761 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.389931 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.389973 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.391473 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.391565 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.391589 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.719203 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:47 crc kubenswrapper[5031]: I0129 08:38:47.887650 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.223642 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:21:23.357435161 +0000 UTC Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.358844 5031 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.358986 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.365042 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.365091 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.366934 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.367016 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.367044 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.367126 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.367152 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:48 crc kubenswrapper[5031]: I0129 08:38:48.367164 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.090575 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.090864 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.092230 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.092276 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.092285 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
29 08:38:49 crc kubenswrapper[5031]: I0129 08:38:49.224313 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:05:12.587357634 +0000 UTC Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.070452 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.070643 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.071804 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.071884 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.071904 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:50 crc kubenswrapper[5031]: I0129 08:38:50.224919 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:36:13.448205311 +0000 UTC Jan 29 08:38:50 crc kubenswrapper[5031]: E0129 08:38:50.370904 5031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.149766 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.149942 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.150986 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.151027 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.151042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.154296 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.225398 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:34:19.034287189 +0000 UTC Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.371272 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.372229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.372434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:38:51 crc kubenswrapper[5031]: I0129 08:38:51.372569 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:38:52 
crc kubenswrapper[5031]: I0129 08:38:52.226336 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:05:10.226910964 +0000 UTC Jan 29 08:38:53 crc kubenswrapper[5031]: I0129 08:38:53.227434 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:30:21.730786777 +0000 UTC Jan 29 08:38:54 crc kubenswrapper[5031]: W0129 08:38:54.028145 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.028246 5031 trace.go:236] Trace[1038158106]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:38:44.026) (total time: 10001ms): Jan 29 08:38:54 crc kubenswrapper[5031]: Trace[1038158106]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:38:54.028) Jan 29 08:38:54 crc kubenswrapper[5031]: Trace[1038158106]: [10.001554708s] [10.001554708s] END Jan 29 08:38:54 crc kubenswrapper[5031]: E0129 08:38:54.028275 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 08:38:54 crc kubenswrapper[5031]: W0129 08:38:54.150560 5031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.150641 5031 trace.go:236] Trace[1581216249]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:38:44.149) (total time: 10001ms): Jan 29 08:38:54 crc kubenswrapper[5031]: Trace[1581216249]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:38:54.150) Jan 29 08:38:54 crc kubenswrapper[5031]: Trace[1581216249]: [10.001316382s] [10.001316382s] END Jan 29 08:38:54 crc kubenswrapper[5031]: E0129 08:38:54.150663 5031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.218522 5031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.227664 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:19:48.525612326 +0000 UTC Jan 29 08:38:54 crc kubenswrapper[5031]: 
I0129 08:38:54.228900 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.229038 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.230005 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.230040 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.230075 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.290906 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.379119 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.380919 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047" exitCode=255
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.381005 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047"}
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.381103 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.381183 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382636 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.382878 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.383983 5031 scope.go:117] "RemoveContainer" containerID="2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047"
Jan 29 08:38:54 crc kubenswrapper[5031]: E0129 08:38:54.395571 5031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188f26df8fefb9b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:38:40.214309303 +0000 UTC m=+0.713897255,LastTimestamp:2026-01-29 08:38:40.214309303 +0000 UTC m=+0.713897255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.410319 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.579794 5031 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.579856 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.586878 5031 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 08:38:54 crc kubenswrapper[5031]: I0129 08:38:54.586930 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.228202 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:45:34.530941816 +0000 UTC
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.386359 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.388684 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae"}
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.388787 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.389003 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.389612 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.389654 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.389665 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.390950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.390979 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:55 crc kubenswrapper[5031]: I0129 08:38:55.390990 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:56 crc kubenswrapper[5031]: I0129 08:38:56.229207 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:29:36.60837045 +0000 UTC
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.229432 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:19:11.625919186 +0000 UTC
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.398226 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.398502 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.398643 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.400288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.400341 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.400358 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.404268 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 08:38:57 crc kubenswrapper[5031]: I0129 08:38:57.440479 5031 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.230475 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:33:38.654641528 +0000 UTC
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.358429 5031 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.358564 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.397066 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.398215 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.398280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.398299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:58 crc kubenswrapper[5031]: I0129 08:38:58.928981 5031 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.231485 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:11:18.552907088 +0000 UTC
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.400198 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.401212 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.401252 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.401264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.569904 5031 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 08:38:59 crc kubenswrapper[5031]: E0129 08:38:59.572979 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.574584 5031 trace.go:236] Trace[831580432]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:38:48.248) (total time: 11325ms):
Jan 29 08:38:59 crc kubenswrapper[5031]: Trace[831580432]: ---"Objects listed" error: 11325ms (08:38:59.574)
Jan 29 08:38:59 crc kubenswrapper[5031]: Trace[831580432]: [11.325857092s] [11.325857092s] END
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.574623 5031 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.575217 5031 trace.go:236] Trace[213165024]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:38:44.636) (total time: 14938ms):
Jan 29 08:38:59 crc kubenswrapper[5031]: Trace[213165024]: ---"Objects listed" error: 14938ms (08:38:59.575)
Jan 29 08:38:59 crc kubenswrapper[5031]: Trace[213165024]: [14.938960307s] [14.938960307s] END
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.575249 5031 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 08:38:59 crc kubenswrapper[5031]: E0129 08:38:59.576729 5031 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 08:38:59 crc kubenswrapper[5031]: I0129 08:38:59.596788 5031 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.212328 5031 apiserver.go:52] "Watching apiserver"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.218318 5031 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.218645 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219208 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219251 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219823 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219899 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219353 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.219994 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.219318 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.220030 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.220075 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.223817 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.223840 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.223900 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.223991 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.224005 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.224113 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.225012 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.225932 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.226667 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.231640 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:55:02.231050853 +0000 UTC
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.266084 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.279025 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.296277 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.313283 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.318809 5031 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.331884 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.344804 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.363002 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374766 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374817 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374851 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374880 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374907 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374939 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374966 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.374998 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375029 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375058 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375086 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375113 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375141 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375187 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375218 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375245 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375274 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375304 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375332 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375360 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375444 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375479 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375509 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375542 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375597 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375656 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375685 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375713 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375742 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375776 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375809 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375837 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375869 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375865 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375900 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375937 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.375970 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376001 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376031 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376070 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376100 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376129 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376159 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376225 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376256 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376287 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376316 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376314 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376320 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376555 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376633 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376713 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376767 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376815 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376865 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376918 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376967 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377017 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377068 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377122 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377174 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377223 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377274 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377321 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377413 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377467 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377517 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377568 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377625 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377675 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377724 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377776 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377827 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377878 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377923 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377973 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378027 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378073 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378122 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378175 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378277 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378333 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378421 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378474 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378528 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378579 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378633 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378682 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378732 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379351 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379466 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379519 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379575 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379667 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379716 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379765 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379812 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379859 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379910 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379969 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376555 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376771 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.380029 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.376964 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377014 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.377292 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378057 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.380085 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.380143 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.380197 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381642 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381725 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381786 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381850 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381909 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381965 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382026 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382083 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382136 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382195 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382247 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382340 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382450 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382508 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382563 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382625 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382684 5031 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382733 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382787 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382846 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382914 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383013 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383071 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383123 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383180 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383235 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383291 5031 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383343 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383439 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383499 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383555 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383614 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383667 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383718 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383785 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383836 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383894 5031 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383955 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384012 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384063 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384118 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384167 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384205 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384252 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384288 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384325 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 
08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384361 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384456 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384503 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384540 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384578 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384615 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384652 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384688 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384722 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384757 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384793 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384829 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384868 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384906 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384944 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384948 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384982 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385020 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385057 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385093 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385144 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385195 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385232 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:39:00 
crc kubenswrapper[5031]: I0129 08:39:00.385269 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385308 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385345 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385435 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385477 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385515 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385552 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385590 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385627 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385663 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 
08:39:00.385701 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385739 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385776 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385816 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386010 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386077 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386133 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386474 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386524 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386547 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 
08:39:00.386575 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386603 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386623 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386645 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386899 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386925 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386974 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387009 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387031 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387057 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387076 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387095 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387118 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387139 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387157 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387180 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387199 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387218 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387239 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387259 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387313 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387326 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387337 5031 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387348 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387360 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387418 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387431 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387441 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.392913 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc 
kubenswrapper[5031]: I0129 08:39:00.394501 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.395453 5031 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.399990 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.406344 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.410691 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.410978 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378607 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.378754 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379001 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379094 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379239 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379282 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379630 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.416594 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379822 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379918 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379859 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.379986 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.380686 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381041 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381082 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381338 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381416 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381871 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.381897 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382289 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382303 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382611 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382907 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.382925 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383096 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383688 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.383921 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384048 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384069 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384424 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384471 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384489 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384539 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384816 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384890 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.384909 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385145 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385299 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385519 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385632 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385684 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.385859 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386083 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.386081 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387283 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387310 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387347 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.387565 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.389057 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.389616 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.389903 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.389989 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.390396 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.390953 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.392974 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.392955 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.393175 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.393585 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.393680 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.393909 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.393942 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.393998 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.394810 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.395203 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.395310 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.395578 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.395697 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.396429 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.396472 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.396971 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.397478 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.397587 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.397834 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.397917 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.398401 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.398262 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.398646 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.399442 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.399493 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.400245 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.400923 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.401020 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.401243 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.401283 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.401430 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.401875 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.402263 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.402626 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.402650 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.402875 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.403125 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.403332 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.403536 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.403673 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.403793 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.404156 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.404502 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.404663 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.405040 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.405560 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.405877 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.406002 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.406235 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.407600 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.407908 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.408148 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.408259 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.408487 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.408836 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.409288 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.409791 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.410031 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.410448 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.411092 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.412174 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.412557 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.414990 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.415581 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.415908 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.416178 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.416508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.416927 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.417202 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.417445 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.417468 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.417480 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.417550 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:00.917522247 +0000 UTC m=+21.417110219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.417661 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.417775 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.418039 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.418235 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.418544 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.418952 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419037 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.417250 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.419239 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419242 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419548 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419554 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419566 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419827 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419865 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.419258 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.419920 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.419964 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.420437 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:00.92035914 +0000 UTC m=+21.419947132 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.420890 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:00.920857744 +0000 UTC m=+21.420445736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.421019 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:00.920998118 +0000 UTC m=+21.420586180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421137 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421150 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421216 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421428 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
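InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""

---- [editor's note] -----------------------------------------------------------
The nestedpendingoperations.go "No retries permitted until ... (durationBeforeRetry
500ms)" entries above show the volume manager's per-volume exponential backoff:
a failed SetUp/TearDown parks the operation, and the wait roughly doubles on
each subsequent failure up to a cap. A sketch of that policy follows; the 500ms
initial delay is taken from these lines, while the doubling factor and the
2-minute cap are assumptions, not values read out of this log:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the retry delay after each failure, capped at maxDelay,
    // mirroring the "durationBeforeRetry" behaviour seen above.
    func nextDelay(prev, maxDelay time.Duration) time.Duration {
        if prev == 0 {
            return 500 * time.Millisecond // first retry, as in the log
        }
        if d := prev * 2; d < maxDelay {
            return d
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 9; attempt++ {
            d = nextDelay(d, 2*time.Minute) // assumed cap
            fmt.Printf("attempt %d: wait %v\n", attempt, d)
        }
    }
---- [end note] ----------------------------------------------------------------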
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.421583 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:00.921554505 +0000 UTC m=+21.421142497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421525 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421630 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421635 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421696 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421818 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.421939 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422018 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422088 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422161 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422215 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422376 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422400 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422467 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422519 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422559 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.422625 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.424979 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.426592 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.430787 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.431143 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.431224 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.431258 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.431282 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.431628 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.432222 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.432340 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.432516 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.432668 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.432799 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.433080 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.433461 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.433824 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.434099 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.434354 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.435575 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.435631 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.435650 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.436250 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.437041 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.437137 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.437156 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.437290 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.440448 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.440520 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.442766 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.446109 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.453898 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.454086 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.457861 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.459903 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.469505 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.476289 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488745 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488786 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488866 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488877 5031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488886 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488895 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488906 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488916 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488913 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488926 5031 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488975 5031 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.488993 5031 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489011 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489024 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489036 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489049 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489061 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489078 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489089 5031 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489105 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489119 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc 
kubenswrapper[5031]: I0129 08:39:00.489132 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489144 5031 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489157 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489046 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489169 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489256 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489268 5031 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489282 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489293 5031 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489318 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489327 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489338 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489347 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489358 5031 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489397 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489407 5031 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489416 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489426 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489434 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489457 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489466 5031 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489475 5031 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489484 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489493 5031 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489502 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489511 5031 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489533 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489544 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489553 5031 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489563 5031 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489573 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489582 5031 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489591 5031 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489615 5031 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489626 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489634 5031 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489643 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489653 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489663 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489689 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489699 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489708 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489717 5031 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489726 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489735 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489744 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489767 5031 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489777 5031 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489786 5031 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489795 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489804 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489812 5031 reconciler_common.go:293] "Volume detached for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489821 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489843 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489852 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489861 5031 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489870 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489878 5031 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489888 5031 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489897 5031 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489919 5031 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489928 5031 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489938 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489947 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489955 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489964 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489973 5031 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.489996 5031 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490005 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490014 5031 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490023 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490032 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490040 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490049 5031 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490073 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490083 5031 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490092 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490102 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490110 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490120 5031 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490129 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490152 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490160 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490170 5031 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490180 5031 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490188 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490197 5031 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490208 5031 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490235 5031 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490244 5031 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490254 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490263 5031 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490271 5031 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490308 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490317 5031 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490325 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490333 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490342 5031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490351 5031 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490360 5031 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490387 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490397 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490406 5031 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490415 5031 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490424 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490434 5031 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490457 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490467 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490479 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490489 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490499 5031 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490508 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490531 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490542 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490551 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490560 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490568 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490577 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490585 5031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490610 5031 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490621 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490630 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490638 5031 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490647 5031 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490655 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490664 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490687 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490697 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490706 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 
08:39:00.490715 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490724 5031 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490734 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490742 5031 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490765 5031 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490774 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490782 5031 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490791 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490800 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490809 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490818 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490825 5031 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490878 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490889 5031 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490897 5031 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490923 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490932 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490941 5031 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490950 5031 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490959 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490966 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490975 5031 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.490999 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491008 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491016 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491025 5031 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491033 5031 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491041 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491050 5031 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491073 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491082 5031 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491091 5031 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491099 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491108 5031 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491118 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491127 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491134 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491158 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491167 5031 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.491176 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.546126 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.556563 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.568704 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:39:00 crc kubenswrapper[5031]: W0129 08:39:00.574621 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a4795c325e904651ff264b64d12c94fc5fdfd7a20ec1b007277b68ace52607d0 WatchSource:0}: Error finding container a4795c325e904651ff264b64d12c94fc5fdfd7a20ec1b007277b68ace52607d0: Status 404 returned error can't find the container with id a4795c325e904651ff264b64d12c94fc5fdfd7a20ec1b007277b68ace52607d0 Jan 29 08:39:00 crc kubenswrapper[5031]: W0129 08:39:00.581914 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-71f439384e9b63a2be0a8bd97267150ad4d425f6c32dd15da8156b9f2a4a9d7e WatchSource:0}: Error finding container 71f439384e9b63a2be0a8bd97267150ad4d425f6c32dd15da8156b9f2a4a9d7e: Status 404 returned error can't find the container with id 71f439384e9b63a2be0a8bd97267150ad4d425f6c32dd15da8156b9f2a4a9d7e Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.995707 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.995797 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.995852 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.995857 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:01.995830578 +0000 UTC m=+22.495418530 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.995899 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:01.99588477 +0000 UTC m=+22.495472712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.995918 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.995950 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:00 crc kubenswrapper[5031]: I0129 08:39:00.995969 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996035 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996053 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996064 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996074 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996086 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:01.996077365 +0000 UTC m=+22.495665307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996101 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:01.996093426 +0000 UTC m=+22.495681378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996135 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996169 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996184 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:00 crc kubenswrapper[5031]: E0129 08:39:00.996247 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:01.99622942 +0000 UTC m=+22.495817372 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.232094 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:09:44.016111643 +0000 UTC Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.421129 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c5bc26e718e8cc4b8e7e07200edeab35e78a0114996c45e8a75e3e4112d5d605"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.422861 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.422911 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.422925 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"71f439384e9b63a2be0a8bd97267150ad4d425f6c32dd15da8156b9f2a4a9d7e"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.424842 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.424902 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a4795c325e904651ff264b64d12c94fc5fdfd7a20ec1b007277b68ace52607d0"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.426655 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.427156 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.429110 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae" exitCode=255 Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.429153 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae"} Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.429243 5031 scope.go:117] "RemoveContainer" containerID="2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.438088 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.444463 5031 scope.go:117] "RemoveContainer" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.444644 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:39:01 crc kubenswrapper[5031]: E0129 08:39:01.444700 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.456874 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
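
The "back-off 10s restarting failed container" entry above is the first rung of the kubelet's CrashLoopBackOff ladder: the delay starts at 10 seconds and doubles on each subsequent crash until it hits a five-minute cap, resetting only after a sufficiently long clean run. A minimal Python sketch of that schedule (the 10s base and 5m cap match well-known kubelet defaults; the reset rule is omitted here):

```python
from datetime import timedelta

BASE = timedelta(seconds=10)   # kubelet's initial CrashLoopBackOff delay
CAP = timedelta(minutes=5)     # kubelet caps the backoff at five minutes

def crashloop_delays(restarts: int) -> list[timedelta]:
    """Return the back-off applied before each of the first `restarts`
    restarts: 10s, 20s, 40s, ... capped at 5m."""
    delays = []
    delay = BASE
    for _ in range(restarts):
        delays.append(delay)
        delay = min(delay * 2, CAP)
    return delays

if __name__ == "__main__":
    for i, d in enumerate(crashloop_delays(8), start=1):
        print(f"restart {i}: wait {d}")
```
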
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.472129 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
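
The status payloads in these "Failed to update status for pod" entries are hard to read because klog escapes the embedded JSON once per quoting layer. A small helper that peels those layers and pretty-prints the patch; it assumes nothing beyond the escaping visible in the entries above:

```python
import json

def decode_patch(raw: str) -> dict:
    """Peel the backslash-escaping klog adds when it embeds a status
    patch inside a quoted err="..." field, then parse the JSON."""
    s = raw
    for _ in range(3):  # the journal output here is escaped up to 3 deep
        try:
            return json.loads(s)
        except json.JSONDecodeError:
            # one unescape pass: \\\" -> \" -> "
            s = s.encode("utf-8").decode("unicode_escape")
    return json.loads(s)

# example: a fragment copied out of one of the entries above
raw = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"}}'
print(json.dumps(decode_patch(raw), indent=2))
```
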
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.489679 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.505080 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:0
0Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.516736 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.533201 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2848ac3a3bbe5d112225a175740333127ad098c74fb2d72891e5fc56efb99047\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"message\\\":\\\"W0129 08:38:43.370728 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
08:38:43.371083 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769675923 cert, and key in /tmp/serving-cert-4051017649/serving-signer.crt, /tmp/serving-cert-4051017649/serving-signer.key\\\\nI0129 08:38:43.714196 1 observer_polling.go:159] Starting file observer\\\\nW0129 08:38:43.717313 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 08:38:43.717445 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:38:43.720833 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4051017649/tls.crt::/tmp/serving-cert-4051017649/tls.key\\\\\\\"\\\\nF0129 08:38:54.227582 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.548132 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.560736 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.573754 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.585567 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.597293 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:01 crc kubenswrapper[5031]: I0129 08:39:01.610515 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:01Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.003075 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.003149 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.003181 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.003208 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.003235 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003290 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003354 5031 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003394 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003411 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003396 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:04.003379199 +0000 UTC m=+24.502967151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003462 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003512 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003449 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003532 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003518 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:04.003485782 +0000 UTC m=+24.503073734 (durationBeforeRetry 2s). 
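
The kube-api-access-* volumes failing here are projected volumes: the kubelet assembles each from a ServiceAccount token plus the kube-root-ca.crt ConfigMap (and, on OpenShift, openshift-service-ca.crt). "Object not registered" means the kubelet's ConfigMap/Secret cache has not re-registered those sources for the pod since the restart, so setup fails and the operation is re-queued with a growing delay, 2s at this point in the log. A sketch of that per-operation retry ladder; the doubling matches the durationBeforeRetry progression seen in logs like this, but the initial delay and cap below are assumptions, not values read from kubelet source:

```python
import itertools
from datetime import timedelta

INITIAL = timedelta(milliseconds=500)  # assumed starting delay
MAX = timedelta(minutes=2)             # assumed cap

def retry_delays():
    """Yield the wait imposed after each successive failure of one
    volume operation: roughly doubling, capped at MAX."""
    delay = INITIAL
    while True:
        yield min(delay, MAX)
        delay = min(delay * 2, MAX)

if __name__ == "__main__":
    for attempt, wait in enumerate(itertools.islice(retry_delays(), 10), 1):
        print(f"after failure {attempt}: no retries permitted for {wait}")
```
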
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003585 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:04.003576145 +0000 UTC m=+24.503164097 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003596 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:04.003591165 +0000 UTC m=+24.503179117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.003607 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:04.003601595 +0000 UTC m=+24.503189547 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.112181 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.232560 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:35:10.015827063 +0000 UTC Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.282269 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.282305 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.282391 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.282458 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.282557 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 08:39:02.282647 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.288198 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.288703 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.289468 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.290027 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.290588 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.291045 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.291659 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.292183 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.292878 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.293396 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.293864 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.294507 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.294976 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.295491 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.295966 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.297583 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.298696 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.299310 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.300128 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.300903 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.302500 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.303687 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.304846 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.307182 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.307929 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.309348 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.310103 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.310615 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.311285 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.311781 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.312260 5031 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.312366 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.313750 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.314206 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.314654 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.315804 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.316462 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.316962 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.317592 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.321305 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.321835 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.322772 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.323535 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.324153 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.324664 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.325160 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.325663 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.326603 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.327472 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.328087 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.328685 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.329309 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.330043 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.330653 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.433505 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.435769 5031 scope.go:117] "RemoveContainer" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae" Jan 29 08:39:02 crc kubenswrapper[5031]: E0129 
08:39:02.435908 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.448666 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.465251 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.477345 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.489648 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.505196 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.516145 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:02 crc kubenswrapper[5031]: I0129 08:39:02.527443 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:02Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.232753 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:56:24.244205213 +0000 UTC
Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.440350 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae"}
Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.441263 5031 scope.go:117] "RemoveContainer" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae"
Jan 29 08:39:03 crc kubenswrapper[5031]: E0129 08:39:03.441572 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.462794 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.481091 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.494290 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.515709 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.531923 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.549251 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:03 crc kubenswrapper[5031]: I0129 08:39:03.564182 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.018965 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.019039 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.019070 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.019100 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.019124 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019211 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019247 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered 
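Every "Failed to update status for pod" entry above fails for the same reason: the kubelet's status patch is intercepted by the validating webhook "pod.network-node-identity.openshift.io" at https://127.0.0.1:9743/pod, and the TLS handshake is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. The wording "certificate has expired or is not yet valid: current time ... is after ..." is the detail string Go's crypto/x509 attaches to a CertificateInvalidError with reason Expired, so this is a plain validity-window check failing, not a trust-chain problem. The interleaved "object ... not registered" errors are a separate, transient family: the kubelet's configmap/secret managers appear to reject volume setup until the restarted kubelet re-registers the pods that reference those objects, so each failure schedules a retry (the 4s durationBeforeRetry seen in the entries that follow is consistent with the volume manager's exponential backoff doubling from a small base) rather than failing the pod outright.

A minimal sketch of the validity-window check that is failing here, using only the Go standard library; the certificate path is hypothetical (on this node the webhook certificate would live wherever network-node-identity mounts its webhook-cert volume):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving cert.
	pemBytes, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch the kubelet log is hitting:
		// "current time 2026-01-29T08:39:03Z is after 2025-08-24T17:21:41Z"
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

Because the webhook is served by the network-node-identity pod itself, and that pod's own status patches are also being blocked, retries alone will not clear the error until the certificate is rotated; that is why essentially identical patch payloads reappear at 08:39:03 and again at 08:39:05 below.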
Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019257 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019212 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019267 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019286 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019312 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019261 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:08.019247325 +0000 UTC m=+28.518835277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019407 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:08.019395409 +0000 UTC m=+28.518983361 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019409 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019422 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:08.019416029 +0000 UTC m=+28.519003981 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019434 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:08.01942821 +0000 UTC m=+28.519016162 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.019457 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:08.01944544 +0000 UTC m=+28.519033392 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.233861 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:11:37.295874299 +0000 UTC Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.282013 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.282105 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:04 crc kubenswrapper[5031]: I0129 08:39:04.282008 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.282249 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.282405 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:04 crc kubenswrapper[5031]: E0129 08:39:04.282498 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.234988 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:11:53.015379135 +0000 UTC Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.361419 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.365450 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.369107 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.377893 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.389291 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.400034 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.411028 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.423049 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.433939 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.447402 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: E0129 08:39:05.451316 5031 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.459610 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.470924 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.481291 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.492844 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.504034 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.514465 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.524717 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.534454 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:05Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.977122 5031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.978735 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.978773 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.978783 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.978855 5031 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.993934 5031 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.994224 5031 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.995202 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:05 crc 
kubenswrapper[5031]: I0129 08:39:05.995238 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.995247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.995264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:05 crc kubenswrapper[5031]: I0129 08:39:05.995273 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:05Z","lastTransitionTime":"2026-01-29T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.012541 5031 csr.go:261] certificate signing request csr-94fvr is approved, waiting to be issued Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.020159 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.025025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.025078 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.025091 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.025109 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.025122 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.035788 5031 csr.go:257] certificate signing request csr-94fvr is issued Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.046544 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.053822 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.053863 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.053881 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.053895 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.053906 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.082893 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.090226 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.090269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.090279 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.090294 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.090305 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.116243 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.120414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.120469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.120481 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.120499 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.120511 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.147644 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.147798 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.150557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
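The two identical mega-patches above are one node-status update being retried: the API server cannot admit the patch without consulting the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-29. After a fixed number of attempts (five in a stock kubelet, the nodeStatusUpdateRetry constant) the kubelet gives up for this sync period, which is the "update node status exceeds retry count" entry. A minimal sketch of the validity check the TLS handshake is enforcing, using only the two timestamps quoted in the error:

    from datetime import datetime, timezone

    # Both timestamps are taken verbatim from the x509 error above.
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # webhook cert notAfter
    now = datetime(2026, 1, 29, 8, 39, 6, tzinfo=timezone.utc)          # node's current time

    # A certificate is acceptable only while notBefore <= now <= notAfter;
    # here `now` is about five months past notAfter, hence
    # "certificate has expired or is not yet valid".
    print(now > not_after)  # True -> handshake fails, patch is rejected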
event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.150609 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.150621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.150636 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.150650 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.235436 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:47:52.026038904 +0000 UTC Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.252901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.252931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.252939 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.252953 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.252962 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.281525 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.281569 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.281618 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.281642 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.281763 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:06 crc kubenswrapper[5031]: E0129 08:39:06.281848 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.355168 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.355207 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.355216 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.355229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.355238 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.457423 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.457460 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.457469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.457483 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.457492 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.559984 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.560034 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.560044 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.560059 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.560069 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.662635 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.662660 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.662670 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.662684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.662694 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.765071 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.765097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.765107 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.765119 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.765128 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.769886 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-588df"] Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.770141 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-588df" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.771118 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-l6hrn"] Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.771316 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.772207 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.772981 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.773466 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.773770 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.773885 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.777143 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7pds"] Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.777256 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.777255 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.777401 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.779218 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mfrbv"] Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.779446 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.781590 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.781591 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.781807 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ghc5v"] Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.782079 5031 util.go:30] "No sandbox for pod can be found. 
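The SyncLoop ADD entries here are the networking daemonsets arriving (ovnkube-node, multus, multus-additional-cni-plugins); one of these is what eventually drops a CNI config into /etc/kubernetes/cni/net.d/, which is why every workload pod above is stuck at "no sandbox" until they run. A minimal sketch of the readiness test behind the recurring NetworkReady=false condition — the directory named in the error simply contains no usable config yet:

    import os

    # Path quoted in the NetworkReady error; the extensions are the ones a
    # standard CNI config loader accepts.
    conf_dir = "/etc/kubernetes/cni/net.d/"
    confs = [f for f in os.listdir(conf_dir)
             if f.endswith((".conf", ".conflist", ".json"))]
    print("NetworkReady:", bool(confs))  # False while the directory is empty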
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783013 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783053 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783088 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783201 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783292 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.783503 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.784847 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.784966 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.785164 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.785255 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.785263 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.785572 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.786301 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.797038 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.810800 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.823527 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.832855 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is 
after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.843967 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.854774 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.867017 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.867063 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.867077 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.867095 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.867107 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.869196 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.880476 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.892632 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.909518 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100
674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.923283 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.936866 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941757 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-bin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941790 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941822 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-netns\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941838 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxqn\" (UniqueName: \"kubernetes.io/projected/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-kube-api-access-spxqn\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941861 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941894 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-k8s-cni-cncf-io\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941938 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.941980 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942002 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-hostroot\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942020 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/458f6239-f61f-4283-b420-460b3fe9cf09-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942047 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942133 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942218 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-system-cni-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942248 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942270 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-multus\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942298 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942325 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942392 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-hosts-file\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942417 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-os-release\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942440 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942465 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942492 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942515 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942536 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cnibin\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " 
pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942560 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-system-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942583 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942603 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942624 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942648 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6rp\" (UniqueName: \"kubernetes.io/projected/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-kube-api-access-bv6rp\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942682 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-os-release\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942718 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvz4b\" (UniqueName: \"kubernetes.io/projected/ad8aa59f-a0fb-4a05-ae89-948075794ac8-kube-api-access-vvz4b\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942750 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-kubelet\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942773 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-multus-certs\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942800 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942825 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/458f6239-f61f-4283-b420-460b3fe9cf09-proxy-tls\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942857 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cnibin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942885 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-daemon-config\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942904 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942948 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/458f6239-f61f-4283-b420-460b3fe9cf09-rootfs\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942971 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-socket-dir-parent\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.942996 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgxq\" (UniqueName: \"kubernetes.io/projected/458f6239-f61f-4283-b420-460b3fe9cf09-kube-api-access-phgxq\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943029 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-etc-kubernetes\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943071 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943106 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl9d\" (UniqueName: \"kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943144 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-conf-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943168 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943191 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943214 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943238 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.943296 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cni-binary-copy\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " 
pod="openshift-multus/multus-ghc5v" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.949686 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: 
I0129 08:39:06.960754 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.969078 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.969121 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.969132 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.969148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.969160 5031 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:06Z","lastTransitionTime":"2026-01-29T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.973571 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-
kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.984294 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:06 crc kubenswrapper[5031]: I0129 08:39:06.995977 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:06Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.005913 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.015810 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.032087 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"na
me\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.036910 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 08:34:06 +0000 UTC, rotation deadline is 2026-10-17 05:23:24.108053044 +0000 UTC Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.036954 5031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6260h44m17.07110097s for next certificate rotation Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.042506 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.043962 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-etc-kubernetes\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.043998 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044024 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl9d\" (UniqueName: \"kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044036 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-etc-kubernetes\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044055 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-conf-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044088 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-conf-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044117 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044135 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044088 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044174 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044198 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044229 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044251 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cni-binary-copy\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044271 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-bin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044291 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044314 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-netns\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044347 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxqn\" (UniqueName: \"kubernetes.io/projected/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-kube-api-access-spxqn\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044347 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-bin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044959 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044987 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045066 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045144 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-k8s-cni-cncf-io\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044359 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045200 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-k8s-cni-cncf-io\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.044403 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-netns\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc 
kubenswrapper[5031]: I0129 08:39:07.045383 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045426 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045459 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-hostroot\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045491 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/458f6239-f61f-4283-b420-460b3fe9cf09-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045523 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045545 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045599 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-system-cni-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045638 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045663 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045668 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-multus\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045707 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-cni-multus\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045728 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045752 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045757 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045818 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-hosts-file\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045847 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-os-release\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045875 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045899 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045933 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045946 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045998 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046026 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-system-cni-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046077 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cnibin\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046086 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-hosts-file\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046137 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/458f6239-f61f-4283-b420-460b3fe9cf09-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046148 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-os-release\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046191 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046195 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046033 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cnibin\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046311 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-system-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046341 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046362 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046392 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.045635 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cni-binary-copy\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046446 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-hostroot\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046414 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046492 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046499 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv6rp\" (UniqueName: \"kubernetes.io/projected/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-kube-api-access-bv6rp\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046518 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046553 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-os-release\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046569 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046580 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046585 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvz4b\" (UniqueName: \"kubernetes.io/projected/ad8aa59f-a0fb-4a05-ae89-948075794ac8-kube-api-access-vvz4b\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046601 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046615 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-kubelet\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046634 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-multus-certs\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 
crc kubenswrapper[5031]: I0129 08:39:07.046649 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad8aa59f-a0fb-4a05-ae89-948075794ac8-os-release\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046658 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046683 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/458f6239-f61f-4283-b420-460b3fe9cf09-proxy-tls\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046707 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cnibin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046733 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-run-multus-certs\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046743 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-system-cni-dir\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046741 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-daemon-config\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046778 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046784 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-host-var-lib-kubelet\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046777 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/ad8aa59f-a0fb-4a05-ae89-948075794ac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046821 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/458f6239-f61f-4283-b420-460b3fe9cf09-rootfs\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046799 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/458f6239-f61f-4283-b420-460b3fe9cf09-rootfs\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046837 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046852 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-cnibin\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046857 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-socket-dir-parent\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046886 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046895 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgxq\" (UniqueName: \"kubernetes.io/projected/458f6239-f61f-4283-b420-460b3fe9cf09-kube-api-access-phgxq\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.046983 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-socket-dir-parent\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.047242 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-multus-daemon-config\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.048245 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.048577 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.049962 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.053199 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.053994 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/458f6239-f61f-4283-b420-460b3fe9cf09-proxy-tls\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.057916 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.066266 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvz4b\" (UniqueName: \"kubernetes.io/projected/ad8aa59f-a0fb-4a05-ae89-948075794ac8-kube-api-access-vvz4b\") pod \"multus-additional-cni-plugins-mfrbv\" (UID: \"ad8aa59f-a0fb-4a05-ae89-948075794ac8\") " pod="openshift-multus/multus-additional-cni-plugins-mfrbv"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.066320 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl9d\" (UniqueName: \"kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d\") pod \"ovnkube-node-f7pds\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7pds"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.066323 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv6rp\" (UniqueName: \"kubernetes.io/projected/d04461ac-be4b-4e84-bb3f-ccef0e9b649d-kube-api-access-bv6rp\") pod \"node-resolver-588df\" (UID: \"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\") " pod="openshift-dns/node-resolver-588df"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.068677 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxqn\" (UniqueName: \"kubernetes.io/projected/e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad-kube-api-access-spxqn\") pod \"multus-ghc5v\" (UID: \"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\") " pod="openshift-multus/multus-ghc5v"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.069842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgxq\" (UniqueName: \"kubernetes.io/projected/458f6239-f61f-4283-b420-460b3fe9cf09-kube-api-access-phgxq\") pod \"machine-config-daemon-l6hrn\" (UID: \"458f6239-f61f-4283-b420-460b3fe9cf09\") " pod="openshift-machine-config-operator/machine-config-daemon-l6hrn"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.071228 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.071255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.071263 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
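The record above is the first fully visible instance of a failure that repeats throughout this boot: every kubelet status PATCH is rejected because the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months earlier than the node's clock (2026-01-29T08:39:07Z). A minimal Go sketch of the same x509 check follows; it is not part of the log, and the certificate path is an assumption for illustration (the log only shows the webhook container's /etc/webhook-cert/ mount).

// certcheck.go — a minimal sketch reproducing the x509 validity check that
// fails in the record above: load a PEM certificate and compare NotAfter
// against the current time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the webhook serving certificate (assumption).
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	if now.After(cert.NotAfter) {
		// Mirrors the kubelet error text: "current time ... is after ..."
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

The records that follow show the kubelet recording node conditions and marking the node NotReady because no CNI configuration has been written yet.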
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.071275 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.071284 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.088068 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-588df"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.096024 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn"
Jan 29 08:39:07 crc kubenswrapper[5031]: W0129 08:39:07.107957 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458f6239_f61f_4283_b420_460b3fe9cf09.slice/crio-645c2cea5c3ffa1d912f4318a5b1c2ae8cbe8837deb838407a968ede738067f6 WatchSource:0}: Error finding container 645c2cea5c3ffa1d912f4318a5b1c2ae8cbe8837deb838407a968ede738067f6: Status 404 returned error can't find the container with id 645c2cea5c3ffa1d912f4318a5b1c2ae8cbe8837deb838407a968ede738067f6
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.109338 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.116428 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mfrbv"
Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.122467 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ghc5v" Jan 29 08:39:07 crc kubenswrapper[5031]: W0129 08:39:07.131907 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2afca9b4_a79c_40db_8c5f_0369e09228b9.slice/crio-993f199bf4789aa7315079a22fa9cc8f3fbd728cf19e1e1e20d6ff3b743c5d6d WatchSource:0}: Error finding container 993f199bf4789aa7315079a22fa9cc8f3fbd728cf19e1e1e20d6ff3b743c5d6d: Status 404 returned error can't find the container with id 993f199bf4789aa7315079a22fa9cc8f3fbd728cf19e1e1e20d6ff3b743c5d6d Jan 29 08:39:07 crc kubenswrapper[5031]: W0129 08:39:07.139171 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode728eb2d_5b24_46b2_99f8_2cc7fe1e3aad.slice/crio-bb15b27522c199b0cdf3f86974d8a6383b3b89d6e0cc8374d58896bea1f83580 WatchSource:0}: Error finding container bb15b27522c199b0cdf3f86974d8a6383b3b89d6e0cc8374d58896bea1f83580: Status 404 returned error can't find the container with id bb15b27522c199b0cdf3f86974d8a6383b3b89d6e0cc8374d58896bea1f83580 Jan 29 08:39:07 crc kubenswrapper[5031]: W0129 08:39:07.139726 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad8aa59f_a0fb_4a05_ae89_948075794ac8.slice/crio-bfd04062268ead77a6babaef02193fe0dacecdc2545c1aace0c3701f0c65cf82 WatchSource:0}: Error finding container bfd04062268ead77a6babaef02193fe0dacecdc2545c1aace0c3701f0c65cf82: Status 404 returned error can't find the container with id bfd04062268ead77a6babaef02193fe0dacecdc2545c1aace0c3701f0c65cf82 Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.179201 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.179247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.179259 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.179276 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.179288 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.236161 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:42:41.097940249 +0000 UTC Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.281393 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.281448 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.281457 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.281468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.281477 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.384326 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.384676 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.384687 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.384703 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.384714 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.450983 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-588df" event={"ID":"d04461ac-be4b-4e84-bb3f-ccef0e9b649d","Type":"ContainerStarted","Data":"c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.451036 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-588df" event={"ID":"d04461ac-be4b-4e84-bb3f-ccef0e9b649d","Type":"ContainerStarted","Data":"796c55def3a88f4d21c589027730dc3a486a92e34bbac564ac5bc34c5926906c"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.452543 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerStarted","Data":"58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.452588 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerStarted","Data":"bb15b27522c199b0cdf3f86974d8a6383b3b89d6e0cc8374d58896bea1f83580"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.453929 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.453975 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.453987 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"645c2cea5c3ffa1d912f4318a5b1c2ae8cbe8837deb838407a968ede738067f6"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.455322 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerStarted","Data":"345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.455382 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerStarted","Data":"bfd04062268ead77a6babaef02193fe0dacecdc2545c1aace0c3701f0c65cf82"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.456330 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" exitCode=0 Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.456378 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} Jan 29 08:39:07 crc kubenswrapper[5031]: 
I0129 08:39:07.456403 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"993f199bf4789aa7315079a22fa9cc8f3fbd728cf19e1e1e20d6ff3b743c5d6d"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.462604 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.476858 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312
ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.487225 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.487260 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.487269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.487285 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.487298 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.489237 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.502997 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.513060 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.522485 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.538022 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.551584 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.566284 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.581096 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.589792 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.589830 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.589838 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.589853 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.589862 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.593555 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.609915 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn
-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.622823 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.633179 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.642860 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.660505 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.672857 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.684661 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.692822 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.692888 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.692901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.692929 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.692943 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.699649 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.744599 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.767110 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.780758 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.795118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.795166 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.795177 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.795196 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.795209 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.796808 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",
\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.811157 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.823052 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.838290 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.897764 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.897799 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.897811 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.897827 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:07 crc kubenswrapper[5031]: I0129 08:39:07.897839 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:07Z","lastTransitionTime":"2026-01-29T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.000253 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.000288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.000298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.000312 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.000321 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.056523 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.056594 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.056620 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.056640 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056687 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:16.056664131 +0000 UTC m=+36.556252083 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056713 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056751 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:16.056738734 +0000 UTC m=+36.556326686 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.056766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056788 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056817 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056832 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056834 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056832 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056844 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056915 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056896 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:16.056877198 +0000 UTC m=+36.556465160 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056970 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:16.05695149 +0000 UTC m=+36.556539522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.056989 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:16.05697591 +0000 UTC m=+36.556563962 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.152034 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.152084 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.152095 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.152118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.152136 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.236702 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:56:37.574891625 +0000 UTC Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.254592 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.254622 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.254631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.254644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.254653 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.281463 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.281486 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.281534 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.281592 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.281721 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:08 crc kubenswrapper[5031]: E0129 08:39:08.281796 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.357100 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.357408 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.357420 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.357440 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.357452 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.459786 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.459834 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.459849 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.459866 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.459878 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466561 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466607 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466617 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466625 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466635 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.466642 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.468125 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3" exitCode=0 Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.468173 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.482184 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.495728 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.510880 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.521919 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.531939 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.542868 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.557490 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.561382 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.561414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.561424 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.561439 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.561448 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.573583 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.585099 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.596718 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.614945 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.626560 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.638297 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.664056 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.664094 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.664103 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.664119 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.664130 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.766220 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.766272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.766286 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.766302 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.766311 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.869638 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.870102 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.870115 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.870137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.870148 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.972404 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.972441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.972453 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.972472 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:08 crc kubenswrapper[5031]: I0129 08:39:08.972484 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:08Z","lastTransitionTime":"2026-01-29T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.075303 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.075343 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.075354 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.075390 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.075404 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.178071 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.178110 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.178118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.178133 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.178143 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.237827 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:44:21.294826768 +0000 UTC Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.280103 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.280147 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.280156 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.280173 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.280183 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.382290 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.382331 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.382378 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.382395 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.382406 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.476450 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974" exitCode=0 Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.476491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.484406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.484636 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.484745 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.484844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.484962 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.491598 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.505741 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.529009 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.543765 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.556542 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.566091 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.580054 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.587654 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.587686 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.587694 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.587707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.587715 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.590518 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.600622 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.613324 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.626978 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.639540 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.651773 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.690123 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.690162 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.690171 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.690186 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.690195 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.792359 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.792415 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.792425 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.792441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.792452 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.894428 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.894462 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.894470 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.894484 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.894495 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.996890 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.996933 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.996945 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.996961 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:09 crc kubenswrapper[5031]: I0129 08:39:09.996971 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:09Z","lastTransitionTime":"2026-01-29T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.063320 5031 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.063692 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rq2c4"] Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.064141 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.087455 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.087897 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.087922 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.087929 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.098766 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.098789 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.098798 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.098811 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.098827 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.100968 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.114738 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rel
ease\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.126390 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.140415 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.154133 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.164462 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.175917 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.187425 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.187620 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w588\" (UniqueName: \"kubernetes.io/projected/dd5b1bdd-3228-49a3-8757-ca54e54430d3-kube-api-access-5w588\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.187668 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/dd5b1bdd-3228-49a3-8757-ca54e54430d3-host\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.187692 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5b1bdd-3228-49a3-8757-ca54e54430d3-serviceca\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201281 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201302 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201312 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.201330 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.215476 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.227415 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.238084 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:27:08.966564234 +0000 UTC Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.239899 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.260341 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.271672 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.281892 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.281978 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:10 crc kubenswrapper[5031]: E0129 08:39:10.282019 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.282060 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:10 crc kubenswrapper[5031]: E0129 08:39:10.282082 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:10 crc kubenswrapper[5031]: E0129 08:39:10.282201 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.288060 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w588\" (UniqueName: \"kubernetes.io/projected/dd5b1bdd-3228-49a3-8757-ca54e54430d3-kube-api-access-5w588\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.288102 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5b1bdd-3228-49a3-8757-ca54e54430d3-host\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.288119 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5b1bdd-3228-49a3-8757-ca54e54430d3-serviceca\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.288400 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5b1bdd-3228-49a3-8757-ca54e54430d3-host\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.289219 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5b1bdd-3228-49a3-8757-ca54e54430d3-serviceca\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.293804 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.303590 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.303817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.303902 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.303983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.304155 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.305980 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w588\" (UniqueName: \"kubernetes.io/projected/dd5b1bdd-3228-49a3-8757-ca54e54430d3-kube-api-access-5w588\") pod \"node-ca-rq2c4\" (UID: \"dd5b1bdd-3228-49a3-8757-ca54e54430d3\") " pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.306114 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.323606 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.335095 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.347121 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.357847 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.371743 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.386594 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.395815 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.397736 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rq2c4" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.407421 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.408020 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.408043 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.408055 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.408070 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.408082 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: W0129 08:39:10.409795 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5b1bdd_3228_49a3_8757_ca54e54430d3.slice/crio-e51b15138dffd840a51e509d029f563e5384af26a4d2d67e21d955d4adfda06f WatchSource:0}: Error finding container e51b15138dffd840a51e509d029f563e5384af26a4d2d67e21d955d4adfda06f: Status 404 returned error can't find the container with id e51b15138dffd840a51e509d029f563e5384af26a4d2d67e21d955d4adfda06f Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.419074 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.431852 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.447573 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.463810 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.494501 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.495124 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rq2c4" event={"ID":"dd5b1bdd-3228-49a3-8757-ca54e54430d3","Type":"ContainerStarted","Data":"e51b15138dffd840a51e509d029f563e5384af26a4d2d67e21d955d4adfda06f"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.499300 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796" exitCode=0 Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.499450 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.510842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.510901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.510913 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.510926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.510935 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.515747 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.530627 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.544106 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.555506 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.569666 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.588653 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.601560 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.614065 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 
08:39:10.614111 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.614123 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.614140 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.614153 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.614426 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.625450 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.641062 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.653569 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.668242 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.679432 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.689112 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.718926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.718957 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.718966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.718981 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.718991 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.821589 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.821623 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.821631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.821645 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.821654 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.923849 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.923892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.923901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.923914 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:10 crc kubenswrapper[5031]: I0129 08:39:10.923922 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:10Z","lastTransitionTime":"2026-01-29T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.027471 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.027515 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.027525 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.027541 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.027550 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.129966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.130010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.130019 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.130033 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.130043 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.232096 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.232137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.232152 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.232168 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.232180 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.238212 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:02:43.88357736 +0000 UTC Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.333785 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.333815 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.333824 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.333836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.333845 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.437169 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.437252 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.437268 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.437337 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.437358 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.505712 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rq2c4" event={"ID":"dd5b1bdd-3228-49a3-8757-ca54e54430d3","Type":"ContainerStarted","Data":"f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.509606 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619" exitCode=0 Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.509703 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.528443 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.540830 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.540873 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.540886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.540904 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.540919 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.541449 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.559551 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.573471 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.590437 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.606384 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.619564 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.631617 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.641991 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.644957 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.645114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.645139 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.645154 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.645168 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.655232 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.667762 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.681435 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.713119 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.741114 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.746945 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.747002 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.747018 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.747036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.747049 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.757271 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.769780 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.780725 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.792766 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.803496 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.821681 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.834163 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.846413 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.848900 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.848942 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.848951 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.848966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.848974 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.860529 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.874505 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.887701 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.901614 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.912965 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.922032 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:11Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.950812 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.950858 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.950871 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:11 crc 
kubenswrapper[5031]: I0129 08:39:11.950888 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:11 crc kubenswrapper[5031]: I0129 08:39:11.950901 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:11Z","lastTransitionTime":"2026-01-29T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.052956 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.053010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.053025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.053044 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.053056 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.155981 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.156036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.156054 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.156077 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.156092 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.238901 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:48:48.210231085 +0000 UTC Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.259130 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.259198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.259218 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.259241 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.259259 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.282114 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:12 crc kubenswrapper[5031]: E0129 08:39:12.282315 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.282398 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.282114 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:12 crc kubenswrapper[5031]: E0129 08:39:12.282547 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:12 crc kubenswrapper[5031]: E0129 08:39:12.282707 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.361506 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.361550 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.361558 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.361573 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.361582 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.464497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.464555 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.464568 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.464637 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.464656 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.516931 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8" exitCode=0 Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.517016 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.523725 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.523765 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.523812 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.541798 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.553528 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.554009 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.559713 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.567466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.567512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.567526 5031 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.567543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.567556 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.574143 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.588277 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.600249 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.614056 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.624621 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.635515 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.650934 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.661730 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.670385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.670426 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.670438 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.670460 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.670471 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.674016 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.690945 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.703392 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.710696 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.714122 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.726354 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.738032 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.748738 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.758353 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.766795 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.773050 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.773082 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.773092 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.773108 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.773118 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.777728 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.787036 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.801007 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.813931 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.827169 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.840412 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.857031 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.867880 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.915453 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.915496 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.915507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.915523 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.915534 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:12Z","lastTransitionTime":"2026-01-29T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:12 crc kubenswrapper[5031]: I0129 08:39:12.918973 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:12Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.018164 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.018208 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.018225 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.018246 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.018261 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.121119 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.121154 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.121162 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.121176 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.121185 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.225618 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.225676 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.225696 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.225719 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.225736 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.239656 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:16:00.543802221 +0000 UTC Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.328502 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.328571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.328590 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.328615 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.328639 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.431345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.431385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.431393 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.431405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.431413 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.530763 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad8aa59f-a0fb-4a05-ae89-948075794ac8" containerID="3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c" exitCode=0 Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.530835 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerDied","Data":"3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.533255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.533302 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.533320 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.533340 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.533356 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.543879 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.577955 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbeb
b4228be5c1c41318ff0512c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.595066 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.607815 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.623685 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635526 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635578 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635593 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635603 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.635661 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.648864 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.658629 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.667684 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.678720 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.688065 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.699497 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.711068 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.723015 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:13Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.737614 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.737651 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.737660 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.737673 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.737682 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.840255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.840290 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.840301 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.840317 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.840328 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.942029 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.942065 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.942095 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.942188 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:13 crc kubenswrapper[5031]: I0129 08:39:13.942209 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:13Z","lastTransitionTime":"2026-01-29T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.044511 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.044545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.044553 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.044565 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.044574 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.146815 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.146854 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.146869 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.146888 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.146903 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.240476 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:39:27.124752618 +0000 UTC
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.249310 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.249359 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.249423 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.249448 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.249465 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.282453 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.282503 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.282570 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:39:14 crc kubenswrapper[5031]: E0129 08:39:14.282729 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:39:14 crc kubenswrapper[5031]: E0129 08:39:14.283064 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:39:14 crc kubenswrapper[5031]: E0129 08:39:14.283439 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.352048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.352102 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.352117 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.352139 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.352155 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.455583 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.455700 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.456048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.456467 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.456545 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.536204 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" event={"ID":"ad8aa59f-a0fb-4a05-ae89-948075794ac8","Type":"ContainerStarted","Data":"1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.550520 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.559015 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.559055 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.559072 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.559094 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.559111 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.573543 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.592741 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.607204 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.630050 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.643956 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.655873 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.661069 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.661102 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.661112 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.661127 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.661138 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.670046 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.681232 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.696484 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.708918 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.724964 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.741625 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.755917 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:14Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.763555 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.763582 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.763590 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.763621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.763631 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.866693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.867057 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.867188 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.867314 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.867512 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.970502 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.970548 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.970561 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.970577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:14 crc kubenswrapper[5031]: I0129 08:39:14.970589 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:14Z","lastTransitionTime":"2026-01-29T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.072950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.072987 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.072996 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.073010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.073019 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.175983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.176025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.176039 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.176054 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.176063 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.240794 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:55:02.131176161 +0000 UTC Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.277704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.277758 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.277776 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.277795 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.277806 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.380707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.380771 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.380788 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.380807 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.380819 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.482817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.483160 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.483171 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.483190 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.483202 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.585207 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.585254 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.585263 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.585277 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.585286 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.688003 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.688052 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.688067 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.688087 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.688099 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.791064 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.791357 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.791664 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.792002 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.792220 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.896017 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.896057 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.896067 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.896081 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.896092 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.998904 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.998969 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.998986 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.999011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:15 crc kubenswrapper[5031]: I0129 08:39:15.999101 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:15Z","lastTransitionTime":"2026-01-29T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.102753 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.102812 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.102829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.102856 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.102874 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.144718 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.144972 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:39:32.144901854 +0000 UTC m=+52.644489846 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.145073 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.145144 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.145201 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.145260 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145347 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145454 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145463 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145526 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145530 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:32.145492882 +0000 UTC m=+52.645080874 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145551 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145353 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145574 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:32.145548343 +0000 UTC m=+52.645136345 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145602 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145626 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145632 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:32.145612655 +0000 UTC m=+52.645200727 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.145695 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:39:32.145672337 +0000 UTC m=+52.645260329 (durationBeforeRetry 16s). 
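Interleaved with these mount failures, the node keeps failing readiness with NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The runtime's readiness probe is essentially a scan of that directory for a network config; a rough approximation in Go, assuming the conventional libcni extensions (.conf, .conflist, .json) — a sketch, not the runtime's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether a CNI network configuration exists in
// confDir, approximating the check behind "NetworkReady=false ... no
// CNI configuration file in /etc/kubernetes/cni/net.d/".
func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions conventionally accepted by libcni
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}
```

Until the multus/OVN pods above write a config into that directory, every sync of a pod needing pod networking is skipped with the "Has your network provider started?" error.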
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.206059 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.206105 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.206114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.206131 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.206141 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.242394 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:19:44.570744959 +0000 UTC Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.281650 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.281700 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.281776 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.281837 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.281970 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.282118 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.308752 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.308805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.308817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.308833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.308846 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.411260 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.411301 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.411313 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.411326 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.411335 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.475468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.475515 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.475526 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.475542 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.475556 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.490968 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.495800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.495871 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.495891 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.495935 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.495977 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.515781 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.520392 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.520434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
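Every one of these node-status patch attempts dies on the same root cause, visible at the tail of the err string: the serving certificate presented by the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node's clock reads 2026-01-29 (typical of a CRC VM resumed long after its certificates were minted). A short sketch to confirm what the webhook is actually presenting, assuming it runs on the node, that the port is listening, and that the third-party cryptography package is installed:

    #!/usr/bin/env python3
    # Fetch the webhook's serving certificate and print its validity window.
    import socket
    import ssl
    from cryptography import x509  # third-party: pip install cryptography

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # we only want to inspect the certificate,
    ctx.verify_mode = ssl.CERT_NONE  # not to validate it (it is expired)

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24T17:21:41Z per the log

Until that certificate is rotated, the API server will keep rejecting the kubelet's node patches through this webhook, so the NotReady condition recorded above cannot be cleared by these retries alone.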
event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.520446 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.520463 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.520476 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.537938 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545030 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
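The kubelet makes several patch attempts per status-update cycle before giving up, which is why the same multi-kilobyte payload (conditions, image list, nodeInfo) recurs at 08:39:16.490968, .515781, .537938 and .559878 with only the attempt timestamp changing; the full payload is kept once above, at the .490968 attempt. When triaging a raw journal like this one, collapsing those repeats makes the unique failures stand out; a small sketch, assuming the journal text arrives on stdin (e.g. piped from journalctl -u kubelet):

    #!/usr/bin/env python3
    # Summarize repeated "Error updating node status" records from a kubelet
    # journal on stdin, printing one line listing every failed patch attempt.
    import re
    import sys

    pat = re.compile(
        r'E\d{4} (\d{2}:\d{2}:\d{2}\.\d+) \d+ kubelet_node_status\.go:\d+\] '
        r'"Error updating node status, will retry"')
    times = pat.findall(sys.stdin.read())
    print(f"{len(times)} failed node-status patch attempts:", ", ".join(times))

On this journal the sketch would report four attempts in this burst, all failing with the identical expired-certificate error.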
event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545512 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/0.log" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545532 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.545971 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.548122 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9" exitCode=1 Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.548158 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.548733 5031 scope.go:117] "RemoveContainer" containerID="68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.559878 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1
688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":4977
42284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.562861 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.562884 5031 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.562893 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.562909 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.562919 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.566081 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbeb
b4228be5c1c41318ff0512c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:15Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:39:15.554940 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 08:39:15.554981 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 08:39:15.555030 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:39:15.555055 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 08:39:15.555061 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 08:39:15.555084 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 08:39:15.555099 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:39:15.555111 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 08:39:15.555113 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 08:39:15.555127 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 08:39:15.555126 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:39:15.555246 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 08:39:15.555270 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:39:15.555316 6326 factory.go:656] Stopping watch factory\\\\nI0129 08:39:15.555332 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
08:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.580082 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: E0129 08:39:16.580192 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582018 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582070 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582088 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582109 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582127 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.582274 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.595429 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.609912 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.620835 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.633907 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.643821 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.652653 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.665549 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.675165 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.684701 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 
08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.684826 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.684842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.684858 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.684870 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.687300 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.702960 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.715740 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.732837 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.787196 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.787235 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.787244 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.787259 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.787269 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.889385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.889424 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.889435 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.889450 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.889461 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.991669 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.991713 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.991724 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.991736 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:16 crc kubenswrapper[5031]: I0129 08:39:16.991747 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:16Z","lastTransitionTime":"2026-01-29T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.093624 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.093693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.093717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.093747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.093770 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.195874 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.195906 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.195917 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.195931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.195941 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.242818 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:47:40.852028288 +0000 UTC Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.298292 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.298322 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.298332 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.298346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.298355 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.400680 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.400881 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.400940 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.401026 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.401084 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.503731 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.503783 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.503800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.503816 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.503830 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.556806 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/0.log" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.560085 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.560793 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.573138 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.593046 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.605991 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.607085 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.607141 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.607160 5031 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.607183 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.607203 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.619075 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.632512 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.642636 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.653734 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.663422 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.674991 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.688346 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.703738 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.709749 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.709805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.709817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.709836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.709847 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.714987 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.749744 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:15Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:39:15.554940 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 08:39:15.554981 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 08:39:15.555030 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:39:15.555055 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 08:39:15.555061 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 08:39:15.555084 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 08:39:15.555099 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:39:15.555111 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 08:39:15.555113 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 08:39:15.555127 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 08:39:15.555126 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:39:15.555246 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 08:39:15.555270 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:39:15.555316 6326 factory.go:656] Stopping watch factory\\\\nI0129 08:39:15.555332 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
08:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.778922 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.812355 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.812430 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.812443 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.812461 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.812474 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.915446 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.915489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.915499 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.915516 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:17 crc kubenswrapper[5031]: I0129 08:39:17.915526 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:17Z","lastTransitionTime":"2026-01-29T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.018477 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.018516 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.018527 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.018558 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.018569 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.121053 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.121097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.121116 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.121137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.121152 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.224623 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.224662 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.224673 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.224689 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.224701 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.243504 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:45:08.271646049 +0000 UTC
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.282498 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.282541 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:39:18 crc kubenswrapper[5031]: E0129 08:39:18.282675 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.283169 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:39:18 crc kubenswrapper[5031]: E0129 08:39:18.283333 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:39:18 crc kubenswrapper[5031]: E0129 08:39:18.283530 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.284144 5031 scope.go:117] "RemoveContainer" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.327098 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.327583 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.327638 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.327667 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.327706 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.430273 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.430309 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.430319 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.430333 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.430342 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.532524 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.532554 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.532564 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.532577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.532586 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.564413 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/1.log"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.564905 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/0.log"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.567221 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8" exitCode=1
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.567260 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8"}
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.567300 5031 scope.go:117] "RemoveContainer" containerID="68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.568082 5031 scope.go:117] "RemoveContainer" containerID="cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8"
Jan 29 08:39:18 crc kubenswrapper[5031]: E0129 08:39:18.568286 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.580734 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.591890 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh"] Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.595168 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.598355 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.598416 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.603111 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name
\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.613015 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.624929 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.635435 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.635500 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.635515 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.635545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.635561 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.638078 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.654148 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.669543 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.672557 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.672637 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.672709 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.672764 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/9eb5a11b-e97b-490e-947f-c5ee889e3391-kube-api-access-2fnr8\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh"
Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.687830 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:15Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:39:15.554940 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 08:39:15.554981 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 08:39:15.555030 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:39:15.555055 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 08:39:15.555061 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 08:39:15.555084 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 08:39:15.555099 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:39:15.555111 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 08:39:15.555113 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 08:39:15.555127 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 08:39:15.555126 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:39:15.555246 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 08:39:15.555270 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:39:15.555316 6326 factory.go:656] Stopping watch factory\\\\nI0129 08:39:15.555332 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.702131 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.719043 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.736821 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.738593 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.738639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.738648 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.738662 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.738673 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.752309 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.770763 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.773985 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/9eb5a11b-e97b-490e-947f-c5ee889e3391-kube-api-access-2fnr8\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.774050 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.774088 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.774143 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.775063 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.775072 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.780159 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eb5a11b-e97b-490e-947f-c5ee889e3391-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.789783 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.793951 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/9eb5a11b-e97b-490e-947f-c5ee889e3391-kube-api-access-2fnr8\") pod \"ovnkube-control-plane-749d76644c-ffnzh\" (UID: \"9eb5a11b-e97b-490e-947f-c5ee889e3391\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.808439 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.824672 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.840586 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.842198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.842236 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.842247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.842262 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.842271 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.855632 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.869295 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.888834 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68ef61c2d522fab5763f1c265ee2c2dd58fddbebb4228be5c1c41318ff0512c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:15Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:39:15.554940 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 08:39:15.554981 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 08:39:15.555030 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:39:15.555055 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 08:39:15.555061 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 08:39:15.555084 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 08:39:15.555099 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:39:15.555111 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 08:39:15.555113 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 08:39:15.555127 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 08:39:15.555126 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:39:15.555246 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 08:39:15.555270 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:39:15.555316 6326 factory.go:656] Stopping watch factory\\\\nI0129 08:39:15.555332 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.900048 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.909220 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.916052 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: W0129 08:39:18.924016 5031 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb5a11b_e97b_490e_947f_c5ee889e3391.slice/crio-54ca777adb8c39c63250035ff0adc0953beeb62d45ab440b9014e5af3b52f702 WatchSource:0}: Error finding container 54ca777adb8c39c63250035ff0adc0953beeb62d45ab440b9014e5af3b52f702: Status 404 returned error can't find the container with id 54ca777adb8c39c63250035ff0adc0953beeb62d45ab440b9014e5af3b52f702 Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.933665 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945141 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945153 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945170 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945182 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:18Z","lastTransitionTime":"2026-01-29T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.945849 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.961579 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 
2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.976241 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:18 crc kubenswrapper[5031]: I0129 08:39:18.990862 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:18Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.003180 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.021416 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.049569 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.049621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.049639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.049660 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.049674 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.152042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.152074 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.152082 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.152122 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.152133 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.244682 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:56:48.12951829 +0000 UTC Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.254733 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.254768 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.254777 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.254790 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.254798 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.357501 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.357557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.357570 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.357588 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.357602 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.459639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.459684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.459693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.459710 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.459725 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.561913 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.561972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.561987 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.562005 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.562017 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.573179 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/1.log" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.578592 5031 scope.go:117] "RemoveContainer" containerID="cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8" Jan 29 08:39:19 crc kubenswrapper[5031]: E0129 08:39:19.578908 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.579606 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.582453 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.582721 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.584090 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" event={"ID":"9eb5a11b-e97b-490e-947f-c5ee889e3391","Type":"ContainerStarted","Data":"3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.584129 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" event={"ID":"9eb5a11b-e97b-490e-947f-c5ee889e3391","Type":"ContainerStarted","Data":"c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.584139 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" event={"ID":"9eb5a11b-e97b-490e-947f-c5ee889e3391","Type":"ContainerStarted","Data":"54ca777adb8c39c63250035ff0adc0953beeb62d45ab440b9014e5af3b52f702"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.596683 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.609536 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.622098 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.633566 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.644425 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.659734 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.663691 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.663732 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.663745 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.663760 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.663772 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.669117 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.681488 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.697229 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.713225 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.726182 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.737155 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.747246 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.758821 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.765414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.765440 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.765449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.765462 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.765470 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.768982 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.779812 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.788958 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.800790 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.808662 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.819640 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.830204 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.840470 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.853141 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.863445 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.868237 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.868317 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.868341 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.868387 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.868406 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.895269 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.912771 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.926425 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.943692 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.958622 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.971532 5031 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.971593 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.971608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.971630 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.971645 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:19Z","lastTransitionTime":"2026-01-29T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:19 crc kubenswrapper[5031]: I0129 08:39:19.975147 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.074740 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.074792 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.074803 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.074824 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.074835 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.177415 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.177456 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.177468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.177486 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.177498 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.245515 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:36:24.741910212 +0000 UTC Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.279273 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.279337 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.279359 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.279469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.279493 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.281560 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.281593 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.281847 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.281882 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.281644 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.281957 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.295091 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.306599 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.320826 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.332347 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.350952 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.363960 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381584 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381612 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381642 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381651 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.381664 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.395880 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.409562 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.426851 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.437023 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.438536 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wnmhx"] Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.438986 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.439048 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.454227 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.465928 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.476116 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.483301 5031 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.483337 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.483346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.483383 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.483396 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.489498 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.489628 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvjg\" (UniqueName: \"kubernetes.io/projected/20a410c7-0476-4e62-9ee1-5fb6998f308f-kube-api-access-tvvjg\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.489716 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.500098 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.509927 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.525348 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.534351 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.545412 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.556913 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.567431 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.579708 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.585229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.585287 5031 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.585295 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.585308 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.585319 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.590092 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.590145 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvjg\" (UniqueName: \"kubernetes.io/projected/20a410c7-0476-4e62-9ee1-5fb6998f308f-kube-api-access-tvvjg\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.590348 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:20 crc kubenswrapper[5031]: E0129 08:39:20.590431 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:21.090415223 +0000 UTC m=+41.590003175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.593081 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.603685 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.616479 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvjg\" (UniqueName: \"kubernetes.io/projected/20a410c7-0476-4e62-9ee1-5fb6998f308f-kube-api-access-tvvjg\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.619902 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.632832 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.646564 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.657463 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.670190 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.681575 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:20Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.687896 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.687967 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.687985 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.688011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.688029 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.790020 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.790076 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.790091 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.790112 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.790132 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.892350 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.892456 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.892470 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.892488 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.892502 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.995121 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.995197 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.995222 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.995266 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:20 crc kubenswrapper[5031]: I0129 08:39:20.995287 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:20Z","lastTransitionTime":"2026-01-29T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.095157 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:21 crc kubenswrapper[5031]: E0129 08:39:21.095299 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:21 crc kubenswrapper[5031]: E0129 08:39:21.095424 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:22.095351593 +0000 UTC m=+42.594939545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.097197 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.097218 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.097227 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.097240 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.097249 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:21Z","lastTransitionTime":"2026-01-29T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.199458 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.199490 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.199499 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.199512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.199522 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:21Z","lastTransitionTime":"2026-01-29T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.246031 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:01:40.000710951 +0000 UTC Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.302112 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.302442 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.302450 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.302463 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.302473 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:21Z","lastTransitionTime":"2026-01-29T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.405977 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.406015 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.406025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.406041 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:21 crc kubenswrapper[5031]: I0129 08:39:21.406052 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:21Z","lastTransitionTime":"2026-01-29T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.106799 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.107283 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.107580 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:24.10755314 +0000 UTC m=+44.607141132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.126459 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.126507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.126519 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.126532 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.126541 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:22Z","lastTransitionTime":"2026-01-29T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.229005 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.229079 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.229101 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.229130 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.229148 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:22Z","lastTransitionTime":"2026-01-29T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.247495 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:21:26.88363727 +0000 UTC Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.282085 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.282290 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.282452 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.282774 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.282859 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.282900 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.282986 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:22 crc kubenswrapper[5031]: E0129 08:39:22.283116 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.332083 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.332140 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.332157 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.332178 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.332193 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:22Z","lastTransitionTime":"2026-01-29T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.434837 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.434941 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.434954 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.434970 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:22 crc kubenswrapper[5031]: I0129 08:39:22.434982 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:22Z","lastTransitionTime":"2026-01-29T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.156025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.156086 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.156112 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.156142 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.156165 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:23Z","lastTransitionTime":"2026-01-29T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.248305 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:52:14.558586163 +0000 UTC Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.258502 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.258543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.258555 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.258571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.258582 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:23Z","lastTransitionTime":"2026-01-29T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.981307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.981340 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.981349 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.981375 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:23 crc kubenswrapper[5031]: I0129 08:39:23.981386 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:23Z","lastTransitionTime":"2026-01-29T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.084319 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.084441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.084460 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.084489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.084507 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.126856 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.127555 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.127795 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:28.127759722 +0000 UTC m=+48.627347684 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.187830 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.187879 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.187895 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.187916 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.187930 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.249517 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:36:50.374503346 +0000 UTC Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.281669 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.281704 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.281805 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.281810 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.281856 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.281994 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.282134 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:24 crc kubenswrapper[5031]: E0129 08:39:24.282277 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.290086 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.290405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.290499 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.290591 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.290680 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.393356 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.393405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.393413 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.393426 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.393437 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.496946 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.497002 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.497013 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.497035 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.497044 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.599948 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.600278 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.600348 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.600471 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.600587 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.703702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.703751 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.703766 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.703787 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.703801 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.806562 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.806603 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.806615 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.806632 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.806644 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.908551 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.908587 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.908597 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.908612 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:24 crc kubenswrapper[5031]: I0129 08:39:24.908623 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:24Z","lastTransitionTime":"2026-01-29T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.010271 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.010561 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.010648 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.010748 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.010836 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.113269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.113306 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.113317 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.113332 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.113343 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.215983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.216315 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.216443 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.216544 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.216629 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.250718 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:41:15.013490991 +0000 UTC Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.318586 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.318657 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.318666 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.318678 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.318687 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.421931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.421984 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.422000 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.422022 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.422036 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.524717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.524763 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.524779 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.524800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.524816 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.627215 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.627259 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.627270 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.627294 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.627306 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.731352 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.732584 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.732631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.732655 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.732667 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.835467 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.835511 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.835523 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.835543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.835554 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.937791 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.937816 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.937823 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.937836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:25 crc kubenswrapper[5031]: I0129 08:39:25.937845 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:25Z","lastTransitionTime":"2026-01-29T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.040805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.040870 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.040892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.040925 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.040949 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.145024 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.145289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.145524 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.145697 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.145850 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.248158 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.248195 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.248207 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.248222 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.248271 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.252200 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:35:27.301327152 +0000 UTC Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.282415 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.282454 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.282415 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.282544 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.282646 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.282690 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.282792 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.282882 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.350578 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.350615 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.350627 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.350644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.350656 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.452772 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.452812 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.452825 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.452842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.452854 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.554700 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.554741 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.554753 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.554769 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.554780 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.631983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.632042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.632059 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.632081 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.632098 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.645250 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:26Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.648533 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.648559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.648567 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.648581 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.648589 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.659211 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.662679 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.662705 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.662714 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.662727 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.662738 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.675495 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.678506 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.678545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
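Note: the recurring x509 failure in these entries is a plain validity-window comparison. The node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29, so every TLS handshake to https://127.0.0.1:9743 is rejected before the status patch is ever evaluated. A minimal Go sketch of the same check (the certificate path below is hypothetical, not taken from this log):

```go
// certcheck.go: reproduce the "certificate has expired or is not yet valid"
// decision that x509 verification applies during the TLS handshake.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch the kubelet log above is reporting.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}
```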
event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.678559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.678577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.678588 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.697329 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.700827 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.700864 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
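Note: independent of the webhook failure, the node stays NotReady because the container runtime reports NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/. The readiness test is essentially "does the conf directory contain a loadable network config". A simplified sketch under that assumption (real runtimes parse candidate files via libcni rather than matching extensions):

```go
// cnicheck.go: simplified version of the "is there any CNI network
// configuration yet?" test behind NetworkReady=false. This sketch only
// inspects file names; it is illustrative, not a runtime's actual code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		return
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
			fmt.Printf("NetworkReady=true: found %s\n", filepath.Join(confDir, e.Name()))
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
}
```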
event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.700872 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.700886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.700895 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.711618 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:26 crc kubenswrapper[5031]: E0129 08:39:26.711731 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.713054 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
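Note: the pair of E-level entries above closes the retry loop. The kubelet attempts the status patch a fixed number of times per sync (nodeStatusUpdateRetry, 5 in the upstream kubelet, which matches the five failed attempts logged here) and then gives up with "update node status exceeds retry count" until the next sync period. A schematic of that control flow, not the kubelet's actual code:

```go
// retry.go: bounded-retry sketch mirroring the log's "will retry" /
// "exceeds retry count" pattern.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // matches the five E-level attempts in the log

// patchNodeStatus stands in for the PATCH that the admission webhook rejects.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io"`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```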
event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.713083 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.713092 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.713108 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.713120 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.815155 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.815200 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.815211 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.815225 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.815239 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.918192 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.918247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.918264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.918286 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:26 crc kubenswrapper[5031]: I0129 08:39:26.918304 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:26Z","lastTransitionTime":"2026-01-29T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.020996 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.021082 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.021105 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.021130 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.021149 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.123889 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.124234 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.124345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.124474 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.124597 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.227126 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.227433 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.227519 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.227598 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.227656 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.253668 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:49:54.176184978 +0000 UTC Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.330306 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.330351 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.330385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.330403 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.330413 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.433174 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.433217 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.433231 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.433248 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.433260 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.535956 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.536018 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.536029 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.536048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.536060 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.638097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.638138 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.638151 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.638165 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.638174 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.740825 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.740883 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.740903 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.740933 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.740953 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.842691 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.842736 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.842748 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.842767 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.842779 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.945257 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.945388 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.945402 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.945421 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:27 crc kubenswrapper[5031]: I0129 08:39:27.945435 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:27Z","lastTransitionTime":"2026-01-29T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.047406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.047443 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.047452 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.047466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.047476 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.149974 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.150020 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.150030 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.150052 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.150067 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.189935 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.190191 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.190292 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:36.19027249 +0000 UTC m=+56.689860442 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.252828 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.252864 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.252873 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.252890 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.252899 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.254161 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:17:56.907493382 +0000 UTC Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.281461 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.281528 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.281630 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.281708 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.281744 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.282780 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.284409 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:28 crc kubenswrapper[5031]: E0129 08:39:28.284761 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.355731 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.355789 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.355799 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.355831 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.355861 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.458753 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.458829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.458866 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.458886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.458897 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.561629 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.561691 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.561706 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.561726 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.561740 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.664673 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.664720 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.664731 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.664746 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.664757 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.766769 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.766813 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.766840 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.766859 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.766872 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.869186 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.869244 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.869261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.869287 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.869312 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.971718 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.971771 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.971780 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.971795 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:28 crc kubenswrapper[5031]: I0129 08:39:28.971803 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:28Z","lastTransitionTime":"2026-01-29T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.074587 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.074627 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.074641 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.074658 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.074666 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.093977 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.101488 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.106628 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.119821 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.132589 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.141432 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.155545 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.165548 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.176775 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.176812 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.176822 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.176838 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.176849 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.177749 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.186726 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.197705 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.208881 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.231700 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3
f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.244610 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.255318 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:04:25.868008049 +0000 UTC Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.255776 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.268256 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.279249 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.279390 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.279470 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.279541 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.279600 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.281763 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.294272 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:29Z is after 
2025-08-24T17:21:41Z" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.383207 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.383258 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.383268 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.383290 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.383303 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.486315 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.486404 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.486415 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.486434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.486446 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.589128 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.589173 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.589184 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.589226 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.589241 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.693628 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.693681 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.693693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.693713 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.693725 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.796844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.796892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.796903 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.796923 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.796936 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.899985 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.900082 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.900096 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.900117 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:29 crc kubenswrapper[5031]: I0129 08:39:29.900128 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:29Z","lastTransitionTime":"2026-01-29T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.002802 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.002866 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.002880 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.002900 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.002912 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.106650 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.106698 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.106707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.106720 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.106732 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.209261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.209303 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.209314 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.209330 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.209342 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.256271 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:05:52.153712043 +0000 UTC Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.281790 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:30 crc kubenswrapper[5031]: E0129 08:39:30.281909 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.282013 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.282099 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.282149 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:30 crc kubenswrapper[5031]: E0129 08:39:30.282294 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:30 crc kubenswrapper[5031]: E0129 08:39:30.282891 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:30 crc kubenswrapper[5031]: E0129 08:39:30.282977 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.300888 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.312740 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.312774 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.312785 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.312800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.312810 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.317605 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.332569 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.351407 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.366114 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.381292 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.393188 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.406323 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.416605 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.416801 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.416837 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.416847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc 
kubenswrapper[5031]: I0129 08:39:30.416866 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.416878 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.430784 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.446836 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.459810 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.470261 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.482247 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.495976 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520432 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520475 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520487 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.520782 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.535051 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.623129 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.623172 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.623183 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.623201 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.623214 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.727103 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.727145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.727155 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.727172 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.727182 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.831134 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.831185 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.831195 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.831212 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.831221 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.934090 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.934138 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.934150 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.934166 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:30 crc kubenswrapper[5031]: I0129 08:39:30.934177 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:30Z","lastTransitionTime":"2026-01-29T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.036844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.036892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.036903 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.036922 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.036933 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.139318 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.139353 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.139378 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.139391 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.139401 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.242031 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.242095 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.242108 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.242123 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.242133 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.256813 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:44:23.663167378 +0000 UTC Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.344718 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.344763 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.344774 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.344792 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.344804 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.447145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.447181 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.447191 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.447207 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.447218 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.549544 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.549586 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.549598 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.549614 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.549629 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.652215 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.652264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.652280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.652302 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.652318 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.755437 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.755484 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.755496 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.755513 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.755525 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.857981 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.858036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.858049 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.858070 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.858085 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.960245 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.960295 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.960307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.960327 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:31 crc kubenswrapper[5031]: I0129 08:39:31.960339 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:31Z","lastTransitionTime":"2026-01-29T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.063820 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.063880 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.063893 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.063910 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.063922 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:32Z","lastTransitionTime":"2026-01-29T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.166585 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.166653 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.166663 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.166684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.166696 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:32Z","lastTransitionTime":"2026-01-29T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.233716 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.233893 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.233977 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:40:04.233928528 +0000 UTC m=+84.733516480 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234036 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.234049 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234102 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:40:04.234083073 +0000 UTC m=+84.733671115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.234176 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.234241 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234343 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234350 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234431 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:40:04.234419403 +0000 UTC m=+84.734007545 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234395 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234455 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234477 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234433 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234514 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234538 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:40:04.234518576 +0000 UTC m=+84.734106568 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.234568 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:40:04.234553527 +0000 UTC m=+84.734141669 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.257471 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:50:23.430043702 +0000 UTC Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.270114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.270170 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.270186 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.270202 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.270216 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:32Z","lastTransitionTime":"2026-01-29T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.281527 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.281527 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.281567 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.281712 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.281751 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.282056 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.282165 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:32 crc kubenswrapper[5031]: E0129 08:39:32.282252 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.372566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.372649 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.372665 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.372689 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.372707 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:32Z","lastTransitionTime":"2026-01-29T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.476761 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.476833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.476843 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.476865 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:32 crc kubenswrapper[5031]: I0129 08:39:32.476880 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:32Z","lastTransitionTime":"2026-01-29T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 08:39:33 crc kubenswrapper[5031]: I0129 08:39:33.258170 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:16:24.927065241 +0000 UTC
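The certificate_manager.go lines are worth decoding: the kubelet-serving certificate is valid until 2026-02-24, yet every logged rotation deadline (2026-01-09, 2025-12-19, 2025-12-06) is already behind the node clock of 2026-01-29, so rotation is treated as immediately due and a fresh jittered deadline is computed on each pass. A sketch of that heuristic, under the assumption (based on upstream client-go behavior) that the deadline is drawn uniformly from 70-90% of the validity window; the notBefore below is an assumption, since the log only prints notAfter:

```python
#!/usr/bin/env python3
# Approximation of the kubelet's rotation-deadline heuristic suggested by
# the certificate_manager.go lines above. Not the real implementation;
# see k8s.io/client-go/util/certificate for the authoritative logic.
import random
from datetime import datetime, timedelta, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered rotation point inside [70%, 90%] of the lifetime."""
    lifetime = not_after - not_before
    return not_before + lifetime * random.uniform(0.7, 0.9)

if __name__ == "__main__":
    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from log
    not_before = not_after - timedelta(days=365)   # assumption: one-year cert
    now = datetime(2026, 1, 29, 8, 39, 33, tzinfo=timezone.utc)
    deadline = rotation_deadline(not_before, not_after)
    state = "overdue, rotate now" if deadline < now else "in the future"
    print(f"rotation deadline {deadline:%Y-%m-%d %H:%M} ({state})")
```

That explains why the deadline changes on every retry: each recomputation re-rolls the jitter, and any result earlier than the current time triggers another immediate rotation attempt.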
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.258907 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:39:30.101067239 +0000 UTC
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.282263 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.282332 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.282286 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:39:34 crc kubenswrapper[5031]: E0129 08:39:34.282451 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:39:34 crc kubenswrapper[5031]: E0129 08:39:34.282507 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:39:34 crc kubenswrapper[5031]: E0129 08:39:34.282614 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.282655 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:39:34 crc kubenswrapper[5031]: E0129 08:39:34.283676 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
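This is the same retry burst as at 08:39:32, two seconds later: the kubelet keeps cycling through the same four pods until the CNI config lands. Rather than reading the spam linearly, a short aggregator over the journal makes the retry pattern obvious (a sketch; assumes journalctl is available and the kubelet logs to the journal under the unit shown at the top of this log):

```python
#!/usr/bin/env python3
# Condense the repeated "Error syncing pod, skipping" entries into per-pod
# retry counts for the current boot. Sketch only; assumes systemd's
# journalctl CLI and a kubelet.service journal unit.
import re
import subprocess
from collections import Counter

POD_RE = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

def sync_error_counts() -> Counter:
    """Count 'Error syncing pod' journal entries per pod for this boot."""
    journal = subprocess.run(
        ["journalctl", "-b", "-u", "kubelet", "--no-pager", "-o", "cat"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(m.group(1) for m in POD_RE.finditer(journal))

if __name__ == "__main__":
    for pod, count in sync_error_counts().most_common():
        print(f"{count:6d}  {pod}")
```

For this log the expected output is four rows, one per network-diagnostics, multus, and network-console pod, with counts growing by one every two-second sync period.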
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.284071 5031 scope.go:117] "RemoveContainer" containerID="cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.330421 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.330533 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.330544 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.330590 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.330605 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.433844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.433882 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.433892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.433912 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.433925 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.537108 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.537157 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.537166 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.537180 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.537189 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.645721 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.645780 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.645793 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.645811 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.645825 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
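The status_manager failures that follow all share one root cause: every pod-status patch must pass the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, and its serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A quick way to confirm that from the node (a sketch; assumes the openssl CLI is installed):

```python
#!/usr/bin/env python3
# Fetch the webhook's serving certificate without verification (an expired
# cert would fail normal validation) and print its notAfter date. Endpoint
# taken from the webhook errors below.
import ssl
import subprocess

HOST, PORT = "127.0.0.1", 9743

def serving_cert_not_after(host: str, port: int) -> str:
    """Grab the server cert as PEM and ask openssl for its end date."""
    pem = ssl.get_server_certificate((host, port))  # no chain validation
    return subprocess.run(
        ["openssl", "x509", "-noout", "-enddate"],
        input=pem, capture_output=True, text=True, check=True,
    ).stdout.strip()

if __name__ == "__main__":
    # Expected here: notAfter=Aug 24 17:21:41 2025 GMT, i.e. long expired.
    print(serving_cert_not_after(HOST, PORT))
```

Until that certificate is rotated (or the node clock is corrected), every patch below is rejected with the same x509 error, so pod statuses on the API server go stale even though the containers themselves are running.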
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.646944 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/1.log"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.651415 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b"}
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.652033 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.667839 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.684119 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.700744 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.714507 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.740232 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.748701 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.748768 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.748777 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.748793 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.748802 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.761888 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.783631 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.800753 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.811038 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.824135 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.839641 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.851301 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.851342 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.851353 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.851383 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.851394 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.855959 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.866824 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.881548 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.897249 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.917577 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node 
cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.930422 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:34Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.954285 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.954334 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.954345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.954384 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:34 crc kubenswrapper[5031]: I0129 08:39:34.954408 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:34Z","lastTransitionTime":"2026-01-29T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.056876 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.056950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.056962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.056980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.056991 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.159877 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.159915 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.159931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.159948 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.159960 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.259301 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:10:57.437252168 +0000 UTC
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.262565 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.262599 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.262608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.262621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.262631 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.365700 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.366010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.366033 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.366053 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.366064 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.467964 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.468010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.468025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.468045 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.468060 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.570193 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.570233 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.570242 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.570256 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.570266 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.657563 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/2.log"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.658693 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/1.log"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.661448 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b" exitCode=1
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.661493 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b"}
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.661552 5031 scope.go:117] "RemoveContainer" containerID="cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.662540 5031 scope.go:117] "RemoveContainer" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b"
Jan 29 08:39:35 crc kubenswrapper[5031]: E0129 08:39:35.662809 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.672148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.672195 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.672203 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.672216 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.672225 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.676937 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.690047 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.701715 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.720152 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.731469 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.742891 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.753540 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.766319 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.774476 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.774518 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.774530 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.774545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.774557 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.778828 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.793147 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.805264 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.817010 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.827390 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.838078 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.848234 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.864614 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb613ddafa7f814c034f8907942e48984a43fb3f62fc1a9c3e5d5bbb8f418b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:17Z\\\",\\\"message\\\":\\\"try.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0129 08:39:17.576573 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:17Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:17.577487 6491 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 08:39:17.577489 6491 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 08:39:17.577497 6491 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.874712 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.876267 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.876307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.876317 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.876333 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.876345 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.979034 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.979074 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.979083 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.979097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:35 crc kubenswrapper[5031]: I0129 08:39:35.979106 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:35Z","lastTransitionTime":"2026-01-29T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.081333 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.081401 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.081417 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.081476 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.081501 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.184250 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.184284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.184293 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.184306 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.184315 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.260487 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:27:50.753728196 +0000 UTC Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.280279 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.280623 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.280824 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:39:52.28078816 +0000 UTC m=+72.780376132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.281469 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.281471 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.281759 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.281625 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.281658 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.282008 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.281476 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.282164 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.287224 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.287340 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.287428 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.287497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.287555 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.390414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.390459 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.390473 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.390491 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.390504 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.492574 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.492915 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.492981 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.493089 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.493179 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.596066 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.596124 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.596143 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.596161 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.596199 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.666307 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/2.log" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.670229 5031 scope.go:117] "RemoveContainer" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.670446 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.681660 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.693207 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.698465 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.698507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.698518 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.698534 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.698545 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.703874 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.716546 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.726725 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.730448 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.730477 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.730489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc 
kubenswrapper[5031]: I0129 08:39:36.730504 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.730515 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.741233 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.743098 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.746194 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.746221 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.746232 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.746245 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.746255 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.753428 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.760359 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.763500 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.763560 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.763571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.763583 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.763591 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.767343 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.774229 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.776785 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.777459 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.777481 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.777490 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.777503 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.777512 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.788308 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.788719 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.791862 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.791895 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.791906 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.791921 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.791932 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.800983 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.803667 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: E0129 08:39:36.803804 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.805195 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.805219 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.805228 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.805239 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.805249 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.818988 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.828735 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.840257 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.850896 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.862320 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.878638 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.907559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.907596 5031 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.907607 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.907620 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:36 crc kubenswrapper[5031]: I0129 08:39:36.907629 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:36Z","lastTransitionTime":"2026-01-29T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.009704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.009739 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.009747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.009761 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.009770 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.111542 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.111588 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.111597 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.111610 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.111618 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.214382 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.214414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.214422 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.214435 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.214447 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.261092 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:59:13.77857388 +0000 UTC Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.316289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.316316 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.316325 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.316337 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.316347 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.419449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.419506 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.419519 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.419541 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.419554 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.522144 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.522252 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.522269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.522288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.522300 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.625000 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.625028 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.625036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.625049 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.625058 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.728211 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.728273 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.728292 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.728320 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.728343 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.832796 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.832862 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.832877 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.832902 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.832918 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.893258 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.909682 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.927449 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.938874 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.938955 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.938976 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.939003 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.939021 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:37Z","lastTransitionTime":"2026-01-29T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.951952 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.964937 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.975009 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:37 crc kubenswrapper[5031]: I0129 08:39:37.986268 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:37Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.003676 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.015766 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.029717 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.041990 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.042034 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.042045 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.042065 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.042079 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.045982 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.058144 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.074736 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.087486 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.100069 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.119718 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.134246 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.144512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.144556 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.144571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.144592 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.144607 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.148661 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.249187 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.249286 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.249299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.249334 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.249348 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.261783 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:55:53.492936793 +0000 UTC Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.282291 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.282587 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.282430 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.282430 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:38 crc kubenswrapper[5031]: E0129 08:39:38.282925 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:38 crc kubenswrapper[5031]: E0129 08:39:38.283276 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:38 crc kubenswrapper[5031]: E0129 08:39:38.283560 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:38 crc kubenswrapper[5031]: E0129 08:39:38.283746 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.352908 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.352967 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.352976 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.353000 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.353010 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.457232 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.457283 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.457297 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.457313 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.457324 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.559736 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.559789 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.559802 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.559817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.559828 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.662919 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.662972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.662980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.662992 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.663001 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.764955 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.765014 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.765025 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.765041 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.765052 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.867238 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.867314 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.867327 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.867344 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.867355 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.969937 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.969964 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.969972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.969984 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:38 crc kubenswrapper[5031]: I0129 08:39:38.969992 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:38Z","lastTransitionTime":"2026-01-29T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.073679 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.073725 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.073751 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.073766 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.073776 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.177563 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.177629 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.177648 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.177675 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.177697 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.263045 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:11:03.110602417 +0000 UTC Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.281338 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.281416 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.281432 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.281454 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.281466 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.384977 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.385072 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.385096 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.385132 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.385160 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.488531 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.488606 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.488619 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.488638 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.488653 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.592275 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.592329 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.592341 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.592359 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.592402 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.695473 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.695532 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.695546 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.695567 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.695584 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.800259 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.800712 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.800823 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.800930 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.801001 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.904938 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.905272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.905343 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.905464 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:39 crc kubenswrapper[5031]: I0129 08:39:39.905535 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:39Z","lastTransitionTime":"2026-01-29T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.009625 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.009656 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.009666 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.009682 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.009693 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.112905 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.112952 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.112962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.112980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.112991 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.215040 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.215106 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.215122 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.215148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.215164 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.264161 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:29:07.928929328 +0000 UTC Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.281547 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.281641 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.281665 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.281712 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:40 crc kubenswrapper[5031]: E0129 08:39:40.282041 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:40 crc kubenswrapper[5031]: E0129 08:39:40.282138 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:40 crc kubenswrapper[5031]: E0129 08:39:40.282278 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:40 crc kubenswrapper[5031]: E0129 08:39:40.282537 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.303970 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.316992 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.318284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.318324 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.318340 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.318358 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.318387 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.328436 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.342033 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 
2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.353346 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.362916 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.374340 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.386325 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.395026 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.408398 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.420529 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.420566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.420577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.420595 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.420608 5031 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.422483 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.433753 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.443703 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.453720 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.465023 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.478343 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.497381 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.522877 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.522922 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.522934 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.522950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.523075 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
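The webhook failure above is a pure clock-versus-certificate problem: the kubelet's status patch is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29T08:39:40Z. A minimal Go sketch of the same validity-window check the TLS stack performs, assuming the webhook's serving certificate has been exported to a hypothetical PEM file:

// expirycheck.go - a sketch of the validity-window check behind
// "x509: certificate has expired or is not yet valid" in the log.
// The file path is a hypothetical export location, not from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// Mirrors the failed comparison in the log: the current time
	// (2026-01-29T08:39:40Z) falls after NotAfter (2025-08-24T17:21:41Z).
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}

The handshake performs this check automatically; the sketch only makes the failing comparison explicit.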
Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.625634 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.625683 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.625693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.625707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.625716 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.729401 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.729433 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.729444 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.729460 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.729471 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.831344 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.831673 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.831682 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.831699 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.831708 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.935459 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.935511 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.935520 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.935540 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:40 crc kubenswrapper[5031]: I0129 08:39:40.935552 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:40Z","lastTransitionTime":"2026-01-29T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.038644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.038693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.038705 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.038724 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.038737 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.140776 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.140819 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.140829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.140846 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.140858 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.243250 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.243288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.243299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.243316 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.243327 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.345837 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.345873 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.345883 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.345899 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.345911 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.448828 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.448866 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.448880 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.448894 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.448907 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.551223 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.551250 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.551260 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.551272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.551281 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.653465 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.653492 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.653500 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.653511 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.653520 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.755884 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.755918 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.755927 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.755942 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.755953 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.800237 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:26:49.407643009 +0000 UTC Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.800705 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.800732 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:41 crc kubenswrapper[5031]: E0129 08:39:41.800790 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.800706 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:41 crc kubenswrapper[5031]: E0129 08:39:41.800877 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.800904 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:41 crc kubenswrapper[5031]: E0129 08:39:41.800944 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:41 crc kubenswrapper[5031]: E0129 08:39:41.800981 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
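Every "Error syncing pod, skipping" entry above traces back to the same condition: the runtime reports NetworkReady=false because it finds no CNI network config in the directory named in the message. A simplified Go sketch of that presence check, assuming the directory from the log; the authoritative logic lives in the container runtime (CRI-O/ocicni), so this only illustrates the idea:

// cnicheck.go - a simplified sketch of why NetworkReady=false is
// reported: no CNI network config exists in the watched directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI config loaders accept
			found = true
			fmt.Println("CNI config present:", e.Name())
		}
	}
	if !found {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
	}
}

Until the network provider (here, the crash-looping ovnkube-node pod) writes its config into that directory, the node stays NotReady and every pod needing a sandbox fails to sync.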
Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.857901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.857934 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.857943 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.857958 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.857969 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.960264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.960307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.960321 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.960363 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:41 crc kubenswrapper[5031]: I0129 08:39:41.960424 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:41Z","lastTransitionTime":"2026-01-29T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.063354 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.063424 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.063433 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.063446 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.063463 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.166727 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.166773 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.166781 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.166797 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.166807 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.269223 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.269275 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.269284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.269298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.269308 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.371163 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.371197 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.371206 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.371219 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.371230 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.473845 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.473926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.473938 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.473953 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.473964 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.576229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.576266 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.576273 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.576289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.576298 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.678915 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.678961 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.678972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.678988 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.678997 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.780915 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.780956 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.780968 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.781088 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.781098 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.801100 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:08:38.382665798 +0000 UTC Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.884471 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.884543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.884553 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.884566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.884578 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.986684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.986713 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.986720 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.986734 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:42 crc kubenswrapper[5031]: I0129 08:39:42.986744 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:42Z","lastTransitionTime":"2026-01-29T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
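The certificate_manager entries deserve a second look: the kubelet-serving certificate is valid until 2026-02-24, yet every computed rotation deadline in this window (2025-12-10, 2025-12-23, 2026-01-03, 2026-01-07) already lies in the past relative to the node clock, so each pass treats rotation as immediately due and logs a freshly jittered deadline. A Go sketch of that deadline computation, assuming the commonly cited 70-90%-of-lifetime jitter window and a hypothetical one-year certificate lifetime (neither fraction nor lifetime appears in the log):

// rotation.go - a sketch of a jittered rotation-deadline computation in
// the style of the kubelet's certificate manager. The 0.7-0.9 window and
// the one-year NotBefore are assumptions for illustration only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // random point in [0.7, 0.9) of the lifetime
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year lifetime
	now, _ := time.Parse(time.RFC3339, "2026-01-29T08:39:42Z")      // node clock from the log
	for i := 0; i < 3; i++ {
		d := nextRotationDeadline(notBefore, notAfter)
		fmt.Printf("rotation deadline %s (past due: %v)\n", d.Format(time.RFC3339), d.Before(now))
	}
}

Under these assumptions the sampled deadlines land between roughly November 2025 and mid-January 2026, matching the logged values; all are before the current time, which is why a new deadline is printed on every sync.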
Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.090202 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.090253 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.090274 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.090300 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.090321 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.192542 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.192570 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.192578 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.192591 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.192600 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.282438 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.282506 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.282443 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:43 crc kubenswrapper[5031]: E0129 08:39:43.282569 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.282458 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:43 crc kubenswrapper[5031]: E0129 08:39:43.282656 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:43 crc kubenswrapper[5031]: E0129 08:39:43.282750 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:43 crc kubenswrapper[5031]: E0129 08:39:43.282806 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.295137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.295176 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.295185 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.295198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.295206 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.398534 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.398577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.398586 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.398599 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.398609 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.500782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.500817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.500827 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.500841 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.500855 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.603590 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.603625 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.603638 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.603653 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.603664 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.705406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.705498 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.705507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.705521 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.705530 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.802085 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:51:13.488418883 +0000 UTC Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.807464 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.807496 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.807506 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.807520 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.807531 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.909274 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.909449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.909458 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.909471 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:43 crc kubenswrapper[5031]: I0129 08:39:43.909480 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:43Z","lastTransitionTime":"2026-01-29T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.011513 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.011543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.011561 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.011575 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.011584 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.113595 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.113659 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.113671 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.113735 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.113751 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.216068 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.216104 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.216114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.216130 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.216140 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.318826 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.318865 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.318874 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.318890 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.318900 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.420741 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.420772 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.420782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.420795 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.420804 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.523264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.523298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.523330 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.523345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.523356 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.626678 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.626744 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.626762 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.626789 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.626806 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.729402 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.729441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.729449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.729464 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.729473 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.803085 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:27:19.491633725 +0000 UTC Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.831417 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.831446 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.831454 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.831466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.831475 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.933710 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.933738 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.933747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.933762 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:44 crc kubenswrapper[5031]: I0129 08:39:44.933771 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:44Z","lastTransitionTime":"2026-01-29T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.036286 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.036321 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.036332 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.036345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.036357 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.138628 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.138680 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.138690 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.138705 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.138716 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.240923 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.240966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.240977 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.240992 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.241002 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.282219 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.282390 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.282248 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:45 crc kubenswrapper[5031]: E0129 08:39:45.282533 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.282219 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:45 crc kubenswrapper[5031]: E0129 08:39:45.282400 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:45 crc kubenswrapper[5031]: E0129 08:39:45.282616 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:45 crc kubenswrapper[5031]: E0129 08:39:45.282691 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.342976 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.343014 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.343023 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.343036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.343046 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.445868 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.445918 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.445928 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.445944 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.445956 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.548864 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.548914 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.548925 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.548943 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.548956 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.651990 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.652037 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.652048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.652065 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.652077 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.754271 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.754322 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.754334 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.754350 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.754360 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.803755 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:03:03.794235714 +0000 UTC Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.857145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.857176 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.857184 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.857197 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.857206 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.959451 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.959483 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.959494 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.959514 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:45 crc kubenswrapper[5031]: I0129 08:39:45.959524 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:45Z","lastTransitionTime":"2026-01-29T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.061418 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.061469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.061479 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.061496 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.061505 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.163453 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.163495 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.163510 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.163528 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.163539 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.265345 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.265407 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.265418 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.265431 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.265440 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.368059 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.368100 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.368108 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.368121 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.368130 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.470150 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.470204 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.470215 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.470230 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.470241 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.572073 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.572116 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.572127 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.572142 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.572152 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.674825 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.674869 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.674879 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.674899 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.674910 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.777958 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.778014 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.778035 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.778063 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.778087 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.804520 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:22:07.46958582 +0000 UTC Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.880462 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.880500 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.880508 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.880522 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.880532 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.982571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.982608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.982621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.982639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:46 crc kubenswrapper[5031]: I0129 08:39:46.982651 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:46Z","lastTransitionTime":"2026-01-29T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.084589 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.084817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.084912 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.085007 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.085087 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.148861 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.148903 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.148915 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.148933 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.148945 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.160936 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:47Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.164419 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.164548 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.164646 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.164705 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.164762 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.175641 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:47Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.179880 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.179919 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.179928 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.179944 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.179955 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.190519 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:47Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.193984 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.194018 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.194027 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.194041 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.194053 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.204467 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:47Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.207634 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.207674 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.207685 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.207702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.207711 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.220134 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:47Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.220260 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.221808 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.221829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.221842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.221861 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.221876 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.282091 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.282131 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.282131 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.282116 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.282260 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.282328 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.282428 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:47 crc kubenswrapper[5031]: E0129 08:39:47.282477 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.324203 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.324243 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.324253 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.324269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.324279 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.426821 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.426853 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.426864 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.426878 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.426888 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.528954 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.528991 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.529001 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.529016 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.529026 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.631289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.631338 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.631350 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.631410 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.631424 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.733104 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.733153 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.733167 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.733183 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.733195 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.805569 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:39:28.208520621 +0000 UTC Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.835379 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.835427 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.835437 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.835451 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.835460 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.937718 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.937756 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.937765 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.937778 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:47 crc kubenswrapper[5031]: I0129 08:39:47.937789 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:47Z","lastTransitionTime":"2026-01-29T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.040101 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.040137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.040148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.040161 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.040169 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.142808 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.142847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.142856 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.142869 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.142878 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.244960 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.244995 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.245005 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.245019 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.245050 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.347756 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.347794 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.347805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.347820 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.347830 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.450227 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.450272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.450284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.450297 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.450306 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.552573 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.552605 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.552617 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.552634 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.552646 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.655662 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.655702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.655713 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.655728 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.655739 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.757942 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.757989 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.758001 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.758052 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.758074 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.806381 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:51:29.855996642 +0000 UTC Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.860495 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.860522 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.860531 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.860544 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.860553 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.962715 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.962747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.962756 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.962770 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:48 crc kubenswrapper[5031]: I0129 08:39:48.962779 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:48Z","lastTransitionTime":"2026-01-29T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.064805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.064847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.064859 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.064873 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.064885 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.167053 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.167090 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.167101 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.167118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.167129 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.269847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.269888 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.269897 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.269911 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.269920 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.282415 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.282444 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.282453 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.282424 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:49 crc kubenswrapper[5031]: E0129 08:39:49.282536 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:49 crc kubenswrapper[5031]: E0129 08:39:49.282672 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:49 crc kubenswrapper[5031]: E0129 08:39:49.282701 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:49 crc kubenswrapper[5031]: E0129 08:39:49.282831 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.372310 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.372357 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.372387 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.372405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.372416 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.474683 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.474729 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.474741 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.474793 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.474805 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.576926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.576967 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.576980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.576996 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.577008 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.679749 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.679793 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.679805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.679820 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.679833 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.782591 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.782628 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.782640 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.782654 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.782663 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.807032 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:08:40.507343858 +0000 UTC Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.884626 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.884704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.884714 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.884762 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.884774 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.986930 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.986962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.986973 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.986990 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:49 crc kubenswrapper[5031]: I0129 08:39:49.987001 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:49Z","lastTransitionTime":"2026-01-29T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.088723 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.088764 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.088774 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.088788 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.088799 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.191091 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.191126 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.191139 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.191156 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.191168 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.293614 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.293644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.293652 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.293665 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.293674 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.299949 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.312362 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.325088 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.340718 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.352189 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.363539 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.375290 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 
08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.386558 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.396594 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.396630 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.396638 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.396653 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.396663 5031 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.397773 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.410481 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 
1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.422178 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.434418 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.444564 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.455036 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.465317 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.483779 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.493694 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.499201 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.499228 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.499237 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.499253 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.499271 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.601444 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.601487 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.601500 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.601518 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.601549 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.703494 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.703527 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.703536 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.703548 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.703559 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.805733 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.805779 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.805787 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.805800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.805810 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.807940 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:19:19.111063662 +0000 UTC Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.908022 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.908074 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.908085 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.908102 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:50 crc kubenswrapper[5031]: I0129 08:39:50.908113 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:50Z","lastTransitionTime":"2026-01-29T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.010178 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.010222 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.010239 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.010255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.010265 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.112498 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.112534 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.112543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.112557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.112565 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.215086 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.215127 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.215135 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.215148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.215157 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.281715 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.281776 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.281840 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:51 crc kubenswrapper[5031]: E0129 08:39:51.281843 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.281856 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:51 crc kubenswrapper[5031]: E0129 08:39:51.281914 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:51 crc kubenswrapper[5031]: E0129 08:39:51.282073 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:51 crc kubenswrapper[5031]: E0129 08:39:51.282187 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.317229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.317270 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.317280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.317297 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.317309 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.420632 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.420689 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.420702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.420721 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.420733 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.523049 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.523097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.523109 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.523127 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.523140 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.625629 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.625681 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.625691 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.625704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.625712 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.727964 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.728015 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.728026 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.728042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.728053 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.808780 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:59:33.374480848 +0000 UTC Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.829498 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.829547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.829559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.829576 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.829630 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.931928 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.931977 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.931994 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.932016 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:51 crc kubenswrapper[5031]: I0129 08:39:51.932032 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:51Z","lastTransitionTime":"2026-01-29T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.034238 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.034298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.034308 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.034324 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.034335 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.136208 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.136262 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.136272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.136289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.136301 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.239217 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.239277 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.239291 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.239309 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.239329 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.282991 5031 scope.go:117] "RemoveContainer" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b" Jan 29 08:39:52 crc kubenswrapper[5031]: E0129 08:39:52.283223 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.306411 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:52 crc kubenswrapper[5031]: E0129 08:39:52.306569 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
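Both failure paths here are governed by doubling back-offs: the CrashLoopBackOff message shows the container-restart ladder at its 20s step (restartCount is 2 for ovnkube-controller), and the nestedpendingoperations entry that follows shows the volume-mount ladder gated at 32s. A sketch of the shared pattern; the 10s/5m restart and 500ms/2m mount base/cap values are assumptions modeled on upstream kubelet defaults, and nextDelay/ladder are illustrative helpers, not kubelet source:

```go
// backoff.go - sketch of the doubling back-off behind both
// "back-off 20s restarting failed container" and "(durationBeforeRetry 32s)".
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay, starting at base and clamping at maxD.
func nextDelay(prev, base, maxD time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	if d := prev * 2; d < maxD {
		return d
	}
	return maxD
}

// ladder returns the first n delays produced by repeated failures.
func ladder(base, maxD time.Duration, n int) []time.Duration {
	out := make([]time.Duration, n)
	var d time.Duration
	for i := range out {
		d = nextDelay(d, base, maxD)
		out[i] = d
	}
	return out
}

func main() {
	// Container restarts: 10s, 20s, 40s, ... capped at 5m; two failed starts
	// put ovnkube-controller at the 20s step seen in the log.
	fmt.Println(ladder(10*time.Second, 5*time.Minute, 5))
	// Volume mount retries: 500ms, 1s, 2s, ... capped near 2m; seven straight
	// MountVolume failures reach the 32s gate logged below (08:39:52 + 32s =
	// the "no retries permitted until 08:40:24" timestamp).
	fmt.Println(ladder(500*time.Millisecond, 2*time.Minute, 7))
}
```
Jan 29 08:39:52 crc kubenswrapper[5031]: E0129 08:39:52.306619 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:40:24.306603965 +0000 UTC m=+104.806191917 (durationBeforeRetry 32s).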
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.341356 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.341408 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.341420 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.341434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.341446 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.444079 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.444118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.444127 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.444143 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.444153 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.546927 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.546966 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.546978 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.546993 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.547003 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.649772 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.649819 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.649833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.649851 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.649863 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.753261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.753566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.754211 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.754285 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.754296 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.808899 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:57:20.064952632 +0000 UTC Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.857346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.857408 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.857429 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.857445 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.857456 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.959438 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.959475 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.959486 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.959499 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
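The certificate_manager.go entries in this stretch compute a different rotation deadline on every pass (2025-11-29, 2025-12-13, 2025-12-17, ...) against the same 2026-02-24 expiration. That is consistent with a jittered deadline drawn fresh each sync, in the style of client-go's certificate manager; the 70-90% window and the assumed one-year notBefore below are illustrative assumptions, not values read from this node:

```go
// rotation.go - sketch of a jittered certificate rotation deadline.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point uniformly in [70%, 90%] of the cert lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year certificate
	for i := 0; i < 3; i++ {
		// Re-drawn each pass, hence the changing deadlines in the log.
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
```

Under these assumptions every drawn deadline lands between roughly Nov 2025 and Jan 2026, i.e. already in the past relative to the log clock of 2026-01-29, so rotation is re-attempted on every sync loop, which is why the entry repeats about once a second.
Jan 29 08:39:52 crc kubenswrapper[5031]: I0129 08:39:52.959508 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:52Z","lastTransitionTime":"2026-01-29T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 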
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.062233 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.062270 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.062280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.062295 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.062308 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.164564 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.164606 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.164618 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.164636 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.164649 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.267136 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.267173 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.267183 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.267198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.267207 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.282446 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.282466 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.282480 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.282446 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:53 crc kubenswrapper[5031]: E0129 08:39:53.282556 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:53 crc kubenswrapper[5031]: E0129 08:39:53.282711 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:53 crc kubenswrapper[5031]: E0129 08:39:53.282794 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:53 crc kubenswrapper[5031]: E0129 08:39:53.282845 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.369145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.369183 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.369191 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.369205 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.369218 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.471277 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.471319 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.471331 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.471346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.471357 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.573301 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.573343 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.573353 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.573371 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.573381 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.675939 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.675980 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.675990 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.676005 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.676015 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.778605 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.778639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.778647 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.778660 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.778668 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.809847 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:08:49.609405776 +0000 UTC Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.834720 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/0.log" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.834762 5031 generic.go:334] "Generic (PLEG): container finished" podID="e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad" containerID="58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558" exitCode=1 Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.834791 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerDied","Data":"58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.835132 5031 scope.go:117] "RemoveContainer" containerID="58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.846894 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.887819 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.887867 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.887879 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.887920 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
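Every status patch in this stretch fails the same way: the API server cannot complete the TLS handshake with the network-node-identity webhook because the webhook's serving certificate lapsed on 2025-08-24, long before the log clock of 2026-01-29. The quoted error is ultimately a wall-clock comparison against the certificate's validity window. A self-contained sketch of that check; the PEM path and file name are hypothetical (the webhook pod mounts its cert under /etc/webhook-cert/, per the network-node-identity-vrzqb status below), and checkWindow is an illustrative helper, not the Go TLS stack:

```go
// certwindow.go - sketch of the NotBefore/NotAfter validity check behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkWindow parses a PEM certificate and compares now against its validity window.
func checkWindow(pemPath string, now time.Time) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Hypothetical path for illustration only.
	if err := checkWindow("/etc/webhook-cert/tls.crt", time.Now()); err != nil {
		fmt.Println("x509:", err)
	}
}
```

Until that certificate is reissued, every kubelet status patch that must pass through the pod.network-node-identity.openshift.io webhook is rejected, which is why the same failure repeats for each pod below.
Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.887933 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 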
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.893883 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.922199 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.933358 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.944292 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.954478 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.964549 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.978365 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.987279 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.990048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.990364 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.990373 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.990401 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.990410 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:53Z","lastTransitionTime":"2026-01-29T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:53 crc kubenswrapper[5031]: I0129 08:39:53.996287 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.003639 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.014202 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.021829 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.032487 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.042108 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.051663 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.059530 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z"
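Every status patch so far fails the same way: the kubelet cannot reach the pod.network-node-identity.openshift.io webhook because the serving certificate at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A minimal sketch for confirming the validity window from the node, assuming Python with the third-party cryptography package is available; the host and port are taken from the webhook URL in the entries above:

```python
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook address from the log lines above

# Disable verification: the point is to read the cert, expired or not.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER, returned even unverified

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc).replace(tzinfo=None)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("expired:  ", now > cert.not_valid_after)
```

Run against this node it would be expected to print notAfter as 2025-08-24 17:21:41 and expired as True, matching the x509 error repeated through the rest of this capture.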
event="NodeHasSufficientMemory" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.093095 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.093111 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.093135 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.093148 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.195890 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.195923 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.195931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.195944 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.195953 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.298214 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.298251 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.298262 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.298277 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.298287 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.401309 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.401410 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.401437 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.401466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.401489 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.504231 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.504304 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.504318 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.504334 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.504345 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.607073 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.607118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.607131 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.607148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.607167 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
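The NodeNotReady condition repeated above comes from the runtime's network-readiness check: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ because ovn-kubernetes has not written its config. A rough equivalent of that check, assuming the standard libcni config extensions (*.conf, *.conflist, *.json); only the directory path comes from the log:

```python
import glob
import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

# libcni-style scan: any config file present means the network is configured
confs = sorted(
    p
    for pattern in ("*.conf", "*.conflist", "*.json")
    for p in glob.glob(os.path.join(CNI_DIR, pattern))
)
print("NetworkReady:", bool(confs))
print("configs:", confs or "none found -- has your network provider started?")
```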
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.709637 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.709690 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.709704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.709724 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.709738 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.810565 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:04:13.153482941 +0000 UTC
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.812618 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.812656 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.812670 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.812687 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.812698 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
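The certificate_manager.go line above explains the rotation activity: the kubelet-serving certificate is valid until 2026-02-24, but its jittered rotation deadline (2026-01-06) already lies behind the node clock (2026-01-29), so the kubelet tries to rotate immediately. client-go picks that deadline at a random point roughly 70-85% of the way through the validity window; a sketch of that computation, with an assumed notBefore since the log only shows the expiry:

```python
import random
from datetime import datetime, timedelta

not_before = datetime(2025, 11, 26, 5, 53, 3)  # assumed issue time (not in the log)
not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiry from the log line above

total = (not_after - not_before).total_seconds()
# wait.Jitter(0.7 * total, 0.2)-style draw: uniform in [0.70, 0.84) of the window
fraction = 0.7 * (1.0 + 0.2 * random.random())
deadline = not_before + timedelta(seconds=total * fraction)
print("rotation deadline:", deadline)  # once the clock passes this, rotation begins
```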
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.839579 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/0.log"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.839647 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerStarted","Data":"d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6"}
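The PLEG event above records kube-multus coming back after its first crash (the container's exit message appears in a status entry further down). With the capture full of near-identical "Failed to update status for pod" entries, a quick tally confirms they all trace to the one expired-certificate failure. A sketch over a saved excerpt of this journal; the filename is illustrative:

```python
import re
from collections import Counter

pods = Counter()
expired_cert_errors = 0
with open("kubelet.log", encoding="utf-8") as fh:  # saved journal excerpt
    for line in fh:
        for m in re.finditer(r'"Failed to update status for pod" pod="([^"]+)"', line):
            pods[m.group(1)] += 1
        expired_cert_errors += line.count("certificate has expired or is not yet valid")

for pod, n in pods.most_common():
    print(f"{n:3d}  {pod}")
print("expired-cert errors:", expired_cert_errors)
```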
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.854228 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z"
Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.868652 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z"
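The status-patch bodies in these entries are JSON, but double-escaped by the journal quoting, which makes the conditions hard to read by eye. The escaping can be undone with two JSON string decodes before parsing; a sketch over a saved excerpt, with an illustrative filename:

```python
import json
import re

text = open("kubelet.log", encoding="utf-8").read()  # saved journal excerpt
m = re.search(r'failed to patch status \\"(.*?)\\" for pod', text, re.DOTALL)
if m:
    body = m.group(1).replace("\n", "")  # drop any hard wraps in the capture
    once = json.loads('"' + body + '"')   # first decode: \\\" -> \"
    plain = json.loads('"' + once + '"')  # second decode: \" -> "
    status = json.loads(plain)["status"]
    for cond in status.get("conditions", []):
        print(cond.get("type"), cond.get("status"), cond.get("message", ""))
```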
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.891315 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.901592 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.911223 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.914776 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.914813 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.914824 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.914840 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.914852 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:54Z","lastTransitionTime":"2026-01-29T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.923040 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.933920 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.945502 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z"
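The multus lastState above finally shows why the container restarted: it copied its CNI plugins successfully and started the daemon, then spent about 45 seconds polling for the default network's readiness indicator file (10-ovn-kubernetes.conf) that ovn-kubernetes never wrote, and exited 1. The shape of that check, with the path and rough timeout taken from the error message; the helper itself is illustrative:

```python
import os
import time

# Readiness indicator named in the multus error message above
INDICATOR = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

def wait_for_file(path: str, timeout: float = 45.0, interval: float = 1.0) -> bool:
    """Poll until path exists or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

if not wait_for_file(INDICATOR):
    # multus logs "timed out waiting for the condition" and exits 1 here
    raise SystemExit(f"still waiting for readiness indicator file @ {INDICATOR}")
```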
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.970223 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.981990 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:54 crc kubenswrapper[5031]: I0129 08:39:54.992032 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:54Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.003893 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:55Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.014184 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:55Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.016829 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.016878 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.016886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.016901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.016910 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.039867 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:55Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.051650 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:55Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.118816 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.119136 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.119225 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.119310 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.119410 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.225707 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.225751 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.225772 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.225791 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.225804 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.281507 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:55 crc kubenswrapper[5031]: E0129 08:39:55.281636 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.281861 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:55 crc kubenswrapper[5031]: E0129 08:39:55.281931 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.282125 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.282254 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:55 crc kubenswrapper[5031]: E0129 08:39:55.282317 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:55 crc kubenswrapper[5031]: E0129 08:39:55.282672 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.333692 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.333736 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.333745 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.333760 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.333769 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.436584 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.436621 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.436639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.436657 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.436669 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.539363 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.539435 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.539451 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.539469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.539481 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.641944 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.641970 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.641979 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.641991 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.641999 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.744316 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.744368 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.744396 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.744413 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.744426 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.811054 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:29:47.603712672 +0000 UTC Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.846848 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.846911 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.846922 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.846935 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.846944 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.949488 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.949530 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.949542 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.949557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:55 crc kubenswrapper[5031]: I0129 08:39:55.949569 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:55Z","lastTransitionTime":"2026-01-29T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.051653 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.051680 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.051687 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.051699 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.051709 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:56Z","lastTransitionTime":"2026-01-29T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... six further identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady event cycles with matching KubeletNotReady "Node became not ready" conditions, logged at 08:39:56.154 through 08:39:56.667 at roughly 100 ms intervals, elided ...]
Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.769210 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.769260 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.769280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.769303 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.769321 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:56Z","lastTransitionTime":"2026-01-29T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.811920 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:38:31.559006655 +0000 UTC Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.872304 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.872390 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.872405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.872423 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.872435 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:56Z","lastTransitionTime":"2026-01-29T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.974881 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.974948 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.974971 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.974999 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:56 crc kubenswrapper[5031]: I0129 08:39:56.975024 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:56Z","lastTransitionTime":"2026-01-29T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.077318 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.077360 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.077406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.077425 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.077439 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.180684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.180741 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.180760 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.180790 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.180814 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.281945 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.282001 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.281957 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.281957 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.282168 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.282221 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.282322 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.282416 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.283423 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.283449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.283457 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.283468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.283477 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.386647 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.386698 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.386714 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.386735 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.386751 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.390227 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.390263 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.390280 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.390299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.390314 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.403513 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:57Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.407769 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.407842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.407859 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.407882 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.407905 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.423496 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:57Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.426806 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.426875 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
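Every status-patch failure above spells out the same root cause at the end of the error: before admitting the kubelet's PATCH to the node object, the API server must call the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-29. The x509 failure is a plain validity-window comparison; a minimal Go sketch of the same check, assuming a hypothetical path to the webhook's serving certificate, is:

```go
// certcheck.go - minimal sketch of the x509 validity test that fails above.
// The certificate path below is hypothetical; point it at the webhook's
// serving certificate on the node.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// Same window test the TLS handshake performs: now must fall inside
	// [NotBefore, NotAfter], otherwise verification fails as in the log.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid (NotBefore %s)\n", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Printf("certificate expired (NotAfter %s)\n", cert.NotAfter)
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}
```

Any certificate whose NotAfter precedes the node clock fails exactly as in the Post "https://127.0.0.1:9743/node?timeout=10s" error above, regardless of whether the certificate is genuinely old or the clock has jumped forward.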
event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.426909 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.426951 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.426978 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.444362 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:57Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.448292 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.448338 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
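The NodeNotReady repetition itself is secondary: the kubelet only relays the runtime's network status, and NetworkReady stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (on OpenShift that file is normally written by the cluster network provider's pods once they come up, which is why the message asks whether the network provider has started). A rough, self-contained approximation of that readiness probe, not the actual CRI-O implementation, might look like:

```go
// cnicheck.go - rough approximation (not the actual CRI-O/kubelet code) of
// the readiness test behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/. Has your network provider started?".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	for _, e := range entries {
		// Extensions the CNI config loader conventionally accepts.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file found")
}
```

This also explains the "Error syncing pod, skipping" entries for network-check-target-xd92c, network-metrics-daemon-wnmhx, and the other pods above: any pod that needs a sandbox network cannot be synced until the CNI configuration exists.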
event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.448355 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.448406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.448424 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.465884 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:57Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.472291 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.472346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.472359 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.472397 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.472415 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.492651 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:57Z is after 2025-08-24T17:21:41Z" Jan 29 08:39:57 crc kubenswrapper[5031]: E0129 08:39:57.493359 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.495403 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.495462 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.495479 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.495502 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.495522 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.597982 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.598316 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.598566 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.598704 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.598824 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.702455 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.702498 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.702512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.702528 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.702543 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.804537 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.804592 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.804612 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.804640 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.804660 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.813286 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:23:24.667329863 +0000 UTC Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.907089 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.907143 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.907164 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.907199 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:57 crc kubenswrapper[5031]: I0129 08:39:57.907233 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:57Z","lastTransitionTime":"2026-01-29T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.010717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.010818 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.010839 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.010871 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.010890 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.113947 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.114284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.114438 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.114512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.114589 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.217266 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.217321 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.217334 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.217352 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.217394 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.320521 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.320565 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.320574 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.320598 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.320607 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.423574 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.423613 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.423623 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.423639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.423651 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.525882 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.525954 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.525969 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.525992 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.526007 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.629987 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.630072 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.630091 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.630122 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.630141 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.732702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.732740 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.732748 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.732763 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.732773 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.813706 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:47:49.067942218 +0000 UTC Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.835939 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.835987 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.835999 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.836015 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.836027 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.938135 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.938166 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.938201 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.938217 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:58 crc kubenswrapper[5031]: I0129 08:39:58.938226 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:58Z","lastTransitionTime":"2026-01-29T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.040748 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.040833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.040860 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.040892 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.040915 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.144211 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.144261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.144288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.144307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.144320 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.246965 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.247010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.247024 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.247042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.247057 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.281524 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:39:59 crc kubenswrapper[5031]: E0129 08:39:59.281666 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.281719 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:39:59 crc kubenswrapper[5031]: E0129 08:39:59.281899 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.281996 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:39:59 crc kubenswrapper[5031]: E0129 08:39:59.282104 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.282155 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:39:59 crc kubenswrapper[5031]: E0129 08:39:59.282243 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.349794 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.349850 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.349865 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.349885 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.349900 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.452425 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.452466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.452474 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.452489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.452497 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.555049 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.555090 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.555103 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.555119 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.555129 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.657830 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.657862 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.657871 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.657888 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.657897 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.761190 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.761241 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.761259 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.761282 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.761302 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.814527 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:56:56.437773094 +0000 UTC Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.864289 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.864352 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.864409 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.864437 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.864454 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.967027 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.967128 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.967145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.967164 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:39:59 crc kubenswrapper[5031]: I0129 08:39:59.967178 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:39:59Z","lastTransitionTime":"2026-01-29T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.069891 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.070193 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.070328 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.070470 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.072464 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.175639 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.175868 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.175968 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.176050 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.176137 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.278354 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.278628 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.278717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.278809 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.278879 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.296027 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.310649 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.329741 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb1790
04dd1f596c79e360cb27054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.345012 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.358769 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.371235 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.381250 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.381284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.381295 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.381312 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.381322 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.384473 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.398744 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 
2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.409059 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.417311 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.424031 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.434340 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.442944 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.454899 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.465842 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.478138 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.485246 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.485275 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.485283 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.485295 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.485303 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.488796 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:00Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.587932 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.587999 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.588009 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.588024 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.588034 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.690188 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.690223 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.690231 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.690243 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.690251 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.793403 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.793452 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.793464 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.793480 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.793493 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.814690 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:21:07.046387395 +0000 UTC Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.895969 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.896011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.896024 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.896041 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.896054 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.998502 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.998539 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.998547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.998562 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:00 crc kubenswrapper[5031]: I0129 08:40:00.998571 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:00Z","lastTransitionTime":"2026-01-29T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.101455 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.101489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.101497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.101509 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.101518 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.203958 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.204021 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.204038 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.204061 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.204075 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.282666 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.282704 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.282782 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:01 crc kubenswrapper[5031]: E0129 08:40:01.282820 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.282678 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:01 crc kubenswrapper[5031]: E0129 08:40:01.282916 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:01 crc kubenswrapper[5031]: E0129 08:40:01.283127 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:01 crc kubenswrapper[5031]: E0129 08:40:01.283262 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.306496 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.306547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.306564 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.306585 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.306602 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.409144 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.409184 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.409193 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.409210 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.409220 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.511241 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.511276 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.511288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.511305 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.511316 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.614989 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.615038 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.615047 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.615061 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.615073 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.717577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.717610 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.717620 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.717644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.717656 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.815207 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:13:59.538749269 +0000 UTC Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.820389 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.820454 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.820463 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.820476 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.820485 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.923047 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.923110 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.923131 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.923154 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:01 crc kubenswrapper[5031]: I0129 08:40:01.923170 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:01Z","lastTransitionTime":"2026-01-29T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.025198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.025255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.025274 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.025298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.025317 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.127989 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.128051 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.128060 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.128073 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.128083 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.232898 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.232983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.233007 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.233037 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.233073 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.336220 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.336258 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.336269 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.336284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.336297 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.438972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.439212 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.439302 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.439425 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.439552 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.542807 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.542894 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.542926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.543026 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.543068 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.645784 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.645825 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.645836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.645857 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.645869 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.748487 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.748531 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.748540 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.748554 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.748564 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.815400 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:32:49.768405235 +0000 UTC
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.850886 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.850937 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.850947 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.850961 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.850970 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.953571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.953616 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.953646 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.953665 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:02 crc kubenswrapper[5031]: I0129 08:40:02.953678 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:02Z","lastTransitionTime":"2026-01-29T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.056751 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.056796 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.056811 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.056833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.056849 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.160272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.160321 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.160338 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.160414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.160448 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.263088 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.263158 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.263177 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.263199 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.263216 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.281695 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.281756 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.281825 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:03 crc kubenswrapper[5031]: E0129 08:40:03.281917 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.281927 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:03 crc kubenswrapper[5031]: E0129 08:40:03.281993 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:03 crc kubenswrapper[5031]: E0129 08:40:03.282194 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:03 crc kubenswrapper[5031]: E0129 08:40:03.282414 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.365744 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.365813 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.365833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.365856 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.365871 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.468385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.468444 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.468457 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.468472 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.468483 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.571232 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.571283 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.571303 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.571325 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.571340 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.675122 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.675185 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.675208 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.675233 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.675254 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.777847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.777951 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.777978 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.778010 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.778036 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.815944 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:54:44.852443961 +0000 UTC
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.880716 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.880787 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.880814 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.880848 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.880868 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.984441 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.984497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.984505 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.984518 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:03 crc kubenswrapper[5031]: I0129 08:40:03.984529 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:03Z","lastTransitionTime":"2026-01-29T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.087767 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.087823 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.087834 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.087854 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.087869 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.190488 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.190538 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.190550 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.190568 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.190581 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.237565 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.237684 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.237718 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.237825 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:08.237789982 +0000 UTC m=+148.737377964 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.237844 5031 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.237903 5031 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.237917 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:08.237891765 +0000 UTC m=+148.737479757 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.238057 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.238129 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238251 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238253 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238253 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:08.238191863 +0000 UTC m=+148.737779855 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238270 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238342 5031 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238283 5031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238443 5031 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238481 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:08.238451461 +0000 UTC m=+148.738039453 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 08:40:04 crc kubenswrapper[5031]: E0129 08:40:04.238532 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:08.238511152 +0000 UTC m=+148.738099144 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.295523 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.295588 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.295602 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.295618 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.295630 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.398261 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.398312 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.398323 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.398341 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.398353 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.500856 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.500914 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.500926 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.500953 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.500967 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.605681 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.605737 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.605754 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.605774 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.605789 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.709318 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.709405 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.709423 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.709445 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.709461 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.811858 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.811931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.811953 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.811983 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.812004 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.817164 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:58:57.696448164 +0000 UTC
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.914406 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.914440 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.914448 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.914463 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:04 crc kubenswrapper[5031]: I0129 08:40:04.914481 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:04Z","lastTransitionTime":"2026-01-29T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.017241 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.017279 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.017291 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.017308 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.017320 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.122774 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.122861 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.122882 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.122906 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.122994 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.225773 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.225822 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.225836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.225855 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.225868 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.282403 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:05 crc kubenswrapper[5031]: E0129 08:40:05.282545 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.282727 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:05 crc kubenswrapper[5031]: E0129 08:40:05.282785 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.282908 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:05 crc kubenswrapper[5031]: E0129 08:40:05.282963 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.283091 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:05 crc kubenswrapper[5031]: E0129 08:40:05.283158 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.292850 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.327686 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.327742 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.327758 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.327779 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.327798 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.395075 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.430434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.430479 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.430490 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.430505 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.430516 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.533484 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.533546 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.533563 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.533588 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.533611 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.636792 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.636849 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.636867 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.636891 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.636908 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.739443 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.739481 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.739492 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.739509 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.739521 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.817653 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:47:06.761506986 +0000 UTC
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.842134 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.842187 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.842199 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.842216 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.842231 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.944901 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.944964 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.944981 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.945014 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:05 crc kubenswrapper[5031]: I0129 08:40:05.945049 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:05Z","lastTransitionTime":"2026-01-29T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.047202 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.047255 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.047266 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.047285 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.047297 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.149511 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.149571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.149594 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.149619 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.149637 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.252312 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.252355 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.252418 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.252438 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.252452 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.355044 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.355123 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.355145 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.355177 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.355203 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.458225 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.458275 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.458291 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.458307 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.458318 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.562662 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.562690 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.562698 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.562717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.562727 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.665606 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.665633 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.665641 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.665654 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.665662 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.768755 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.768811 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.768826 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.768844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.768864 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.818485 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:56:03.265599016 +0000 UTC
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.872137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.872196 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.872213 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.872239 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.872333 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.975543 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.975608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.975626 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.975655 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:06 crc kubenswrapper[5031]: I0129 08:40:06.975674 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:06Z","lastTransitionTime":"2026-01-29T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.079429 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.079468 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.079478 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.079494 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.079503 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.182692 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.182737 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.182747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.182767 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.182781 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.282602 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.282667 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.282722 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.282739 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.282788 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.282961 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.283437 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.283575 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.283707 5031 scope.go:117] "RemoveContainer" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.285483 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.285559 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.285580 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.285607 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.285627 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.389976 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.390585 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.390684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.390778 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.390867 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.493871 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.493918 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.493930 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.493948 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.493961 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.596386 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.596436 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.596446 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.596463 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.596477 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.629842 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.629896 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.629909 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.629931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.629946 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.644311 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.649744 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.649776 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.649785 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.649800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.649812 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.662226 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.666313 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.666353 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.666362 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.666389 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.666398 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.685914 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.689352 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.689404 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.689417 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.689433 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.689442 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.701197 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.704147 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.704187 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.704195 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.704211 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.704220 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.715644 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1dd69a58-5dc0-47e9-a470-c4ae2c0f7e72\\\",\\\"systemUUID\\\":\\\"3666a2ab-1f8e-4807-b408-7fd2eb819480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: E0129 08:40:07.715752 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.716884 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.716909 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.716918 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.716929 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.716938 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818576 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:24:26.755981016 +0000 UTC Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818697 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818729 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818737 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818749 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.818758 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.890197 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/2.log" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.903155 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.903707 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.920438 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.920475 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.920484 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.920497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.920508 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:07Z","lastTransitionTime":"2026-01-29T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.924759 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c8c0a2b-03ee-470f-a6c4-129bbf1088a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cc720ab47dc43d10fb8d4518891fa77ad4a77c202f81f7052295cffe3192b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed27407f74e0e42326f42118c3a585ceaca50f845d98fbd925b441588c376916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e18c4e94401e069ecbb55ee30edae67591da008ce0b9aededca0e164ddd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbf06e68778d628ad3f1e9788fd5561af77781cb1ea44a75bb365c164747a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b92ae3d176121c1c6dc75aad307d0025b046b1116b47b5fac22db95279e7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.941193 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.953582 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 
08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.965034 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.976757 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.986311 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:07 crc kubenswrapper[5031]: I0129 08:40:07.997891 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.009277 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023106 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023143 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023153 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023168 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023179 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.023725 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.035730 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.048168 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8008b6e-690d-49dd-a77f-f90d2b5029ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e37c385b7c817a87383778a95e8692b36950e82793bc221a7b1eb04083b132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.062768 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.074730 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.090942 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd2619ec30cf65d386679f5cea029a1de1fe262
a5840fd896d2716fa71d8e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.101498 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 
08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.114514 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.125136 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.125163 5031 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.125171 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.125182 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.125191 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.126395 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.135606 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.147522 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.227781 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.227823 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.227831 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.227847 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.227856 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.330093 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.330144 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.330153 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.330167 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.330176 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.432745 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.432782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.432790 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.432805 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.432813 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.535614 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.535675 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.535692 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.535716 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.535733 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.638800 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.638848 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.638860 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.638882 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.638897 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.740860 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.740922 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.740931 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.740945 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.740953 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.818996 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:49:40.45703789 +0000 UTC Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.844150 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.844254 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.844284 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.844318 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.844345 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.909293 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/3.log" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.910922 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/2.log" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.915354 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" exitCode=1 Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.915452 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.915529 5031 scope.go:117] "RemoveContainer" containerID="f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.916328 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:40:08 crc kubenswrapper[5031]: E0129 08:40:08.916551 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.943397 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.952168 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.953270 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.953281 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.953298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.953307 5031 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:08Z","lastTransitionTime":"2026-01-29T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.961699 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.981117 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:08 crc kubenswrapper[5031]: I0129 08:40:08.992274 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:08Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.005583 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8008b6e-690d-49dd-a77f-f90d2b5029ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e37c385b7c817a87383778a95e8692b36950e82793bc221a7b1eb04083b132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.017486 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.031252 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.053423 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd2619ec30cf65d386679f5cea029a1de1fe262
a5840fd896d2716fa71d8e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01113f48c30a687bba79881433045f9dbfb179004dd1f596c79e360cb27054b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:35Z\\\",\\\"message\\\":\\\"rue skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 08:39:35.171465 6742 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF0129 08:39:35.171469 6742 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:39:35Z is after 2025-08-24T17:21:41Z]\\\\nI0129 08:39:35.171481 6742 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnost\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:40:08Z\\\",\\\"message\\\":\\\"operator-lifecycle-manager for network=default : 1.790852ms\\\\nI0129 08:40:08.044602 7200 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0129 08:40:08.044625 7200 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0129 08:40:08.044662 7200 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 08:40:08.044684 7200 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 08:40:08.044139 7200 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0129 08:40:08.044920 7200 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 1.319848ms\\\\nI0129 08:40:08.044937 7200 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 
08:40:08.045005 7200 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 08:40:08.045035 7200 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:40:08.045059 7200 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:40:08.045116 7200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.055466 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.055497 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.055507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.055522 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.055533 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.068591 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.082906 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 
2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.102215 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.115251 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.129185 5031 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.151063 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c8c0a2b-03ee-470f-a6c4-129bbf1088a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cc720ab47dc43d10fb8d4518891fa77ad4a77c202f81f7052295cffe3192b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed27407f74e0e42326f42118c3a585ceaca50f845d98fbd925b441588c376916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e18c4e94401e069ecbb55ee30edae67591da008ce0b9aededca0e164ddd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbf06e68778d628ad3f1e9788fd5561af77781
cb1ea44a75bb365c164747a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b92ae3d176121c1c6dc75aad307d0025b046b1116b47b5fac22db95279e7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.157488 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.157530 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.157542 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.157558 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.157569 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.166171 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.181339 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.191389 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.205848 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.218792 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.260623 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.260672 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.260684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.260702 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.260716 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.281930 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.282000 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.282048 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.282085 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:09 crc kubenswrapper[5031]: E0129 08:40:09.282070 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:09 crc kubenswrapper[5031]: E0129 08:40:09.282191 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:09 crc kubenswrapper[5031]: E0129 08:40:09.282314 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:09 crc kubenswrapper[5031]: E0129 08:40:09.282514 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.363920 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.363958 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.363970 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.363994 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.364006 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.466959 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.467004 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.467019 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.467043 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.467057 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.569608 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.569677 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.569694 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.569721 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.569743 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.672950 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.673011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.673083 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.673105 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.673123 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.778465 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.778519 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.778537 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.778560 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.778579 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.819186 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:28:57.780022631 +0000 UTC
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.881616 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.881684 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.881703 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.881728 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.881746 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.921270 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/3.log"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.927298 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"
Jan 29 08:40:09 crc kubenswrapper[5031]: E0129 08:40:09.927873 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9"
Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.941971 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8008b6e-690d-49dd-a77f-f90d2b5029ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e37c385b7c817a87383778a95e8692b36950e82793bc221a7b1eb04083b132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e20ebd07e552025f4a3601008a1316aeb341b3923b3a836eeaf80e6c3c501400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.957829 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.972994 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.985421 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.985488 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.985510 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.985541 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:09 crc kubenswrapper[5031]: I0129 08:40:09.985563 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:09Z","lastTransitionTime":"2026-01-29T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.001080 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afca9b4-a79c-40db-8c5f-0369e09228b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:40:08Z\\\",\\\"message\\\":\\\"operator-lifecycle-manager for network=default : 1.790852ms\\\\nI0129 08:40:08.044602 7200 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0129 08:40:08.044625 7200 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0129 08:40:08.044662 7200 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 08:40:08.044684 7200 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 08:40:08.044139 7200 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0129 08:40:08.044920 7200 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 1.319848ms\\\\nI0129 08:40:08.044937 7200 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 08:40:08.045005 7200 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 08:40:08.045035 7200 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:40:08.045059 7200 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:40:08.045116 7200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:40:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sl9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f7pds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.017305 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9eb5a11b-e97b-490e-947f-c5ee889e3391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4740ecd01274f82bd3ad39d754c255ad4d21b385448161be73d9c935edd0385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb0c3e2bc530d949f2724bee8f8bd81d935ddf98f369965f516b46d266b5074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ffnzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.035742 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.051763 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.112614 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.112698 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.112717 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.112745 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.112763 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.124981 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.150187 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 
2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.176343 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c8c0a2b-03ee-470f-a6c4-129bbf1088a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cc720ab47dc43d10fb8d4518891fa77ad4a77c202f81f7052295cffe3192b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed27407f74e0e42326f42118c3a585ceaca50f845d98fbd925b441588c376916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e18c4e94401e069ecbb55ee30edae67591da008ce0b9aededca0e164ddd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbf06e68778d628ad3f1e9788fd5561af77781cb1ea44a75bb365c164747a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b92ae3d176121c1c6dc75aad307d0025b046b1116b47b5fac22db95279e7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026
-01-29T08:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.191059 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.204361 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 
08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.215695 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.215782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.215802 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.215970 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-588df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04461ac-be4b-4e84-bb3f-ccef0e9b649d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c60e638d3b7e57c8cf1a5e116cfa7517f849d9d39a829d1c118c76b5ff8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-588df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.216264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.216558 5031 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.231489 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ghc5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:39:53Z\\\",\\\"message\\\":\\\"2026-01-29T08:39:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8\\\\n2026-01-29T08:39:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8d1fb9f7-a29a-45bd-b822-c36fb0256bc8 to /host/opt/cni/bin/\\\\n2026-01-29T08:39:08Z [verbose] multus-daemon started\\\\n2026-01-29T08:39:08Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:39:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ghc5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.245006 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.260532 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02d75cf5-b6e2-4154-ba13-d7ce17d37394\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 08:39:00.461855 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 08:39:00.462116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:39:00.463497 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1384013403/tls.crt::/tmp/serving-cert-1384013403/tls.key\\\\\\\"\\\\nI0129 08:39:00.910977 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:39:00.913935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:39:00.913954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:39:00.913978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:39:00.913985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:39:00.921613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 08:39:00.921631 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 08:39:00.921646 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921653 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:39:00.921660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:39:00.921667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:39:00.921673 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:39:00.921678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 08:39:00.923130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.272664 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31f50a32-a913-456d-ae3c-a23edd836461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f87eb4041da4e867f85ef3d60ce0fbb4c2e2dbb5dfeb1fe85c7d79a0196d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa626399f3372407754f195363ec94bbf832c1b686ad6b18a5f7a05e375c5366\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f761231872d1faa60646c591951ce1dc35e445a5f10cbc0956cc607ce3f88eea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.287949 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c68c35f4bda420e489b22e62a91721e5ad565a13f16e6c582ea1b1375ec5ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.302666 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20a410c7-0476-4e62-9ee1-5fb6998f308f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvvjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wnmhx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.315195 5031 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d42fb1edefe6926edc23f7b6829f3e335668a66421d7a8bf7070030c49428697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c8e83d3491e4bdd166780da460c84eea112f0def64a1df80afbab567680ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.318935 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.318999 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: 
I0129 08:40:10.319013 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.319028 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.319040 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.327146 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.336513 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"458f6239-f61f-4283-b420-460b3fe9cf09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd869b118511e9db3462830060c37cc095220be064e2e2549a6380effb011c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6hrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.351126 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad8aa59f-a0fb-4a05-ae89-948075794ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bcdbf954feffee5cbf73a6374be5f29783fc90d600c4a398c3f3988c89fe5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345a5e0443372262dd75db3a7388bac041b352b63d6fa537e2eea331494979c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e5264b186a71312d5b1c01e47c4e8d1d83d5fb4a76040fe313117221c0f974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83fb2f2c4e4301f2594ecd52d538175b282cb624e548577e09bfec5c8090796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73fe78d412303b26f85077fa015651adc3ef709bbdf2f23c4c45aea9b1e6b619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e560ff69a44dff166d51af5d21db5d6a2ea3659b85e714c117eda6c74d9fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3496b68fcb7d7a46afaf26ebc4aa645caa9df027c27fdb80908cae9c661bca0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvz4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfrbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.362187 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rq2c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5b1bdd-3228-49a3-8757-ca54e54430d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c2337118fa2683d939f4913fd71770645e713e07608119a25188d89f7d4d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5w588\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rq2c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.378950 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c8c0a2b-03ee-470f-a6c4-129bbf1088a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cc720ab47dc43d10fb8d4518891fa77ad4a77c202f81f7052295cffe3192b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed27407f74e0e42326f42118c3a585ceaca50f845d98fbd925b441588c376916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e18c4e94401e069ecbb55ee30edae67591da008ce0b9aededca0e164ddd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbf06e68778d628ad3f1e9788fd5561af77781cb1ea44a75bb365c164747a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b92ae3d176121c1c6dc75aad307d0025b046b1116b47b5fac22db95279e7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88e1734aaa72152bc739d0f092ab2dd86228795118dabddae18c26c8104cf2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9449c1c929d06b6078c56ebf204db8a47b381794bb2dac31d48e351ff20fed70\\\",\\\"exitCod
e\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a69b08e535d57dec901b414fc763471ffc90a90664a9d99f062ca336b3992dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.395671 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b44cd28d-dd93-4b06-80c5-d1f869527176\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bd36f8fa19b96c1f27800da3e896c5419eb278d828de9dda971b9877bfe09f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://474a5aa2c8f511b03a32b6886bbd23cfae7801955b756cfc6dc6c4fb825ee52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6787b30453a23234e9c6b2bb3125541a0cb427a4db8f58bb27eaa0d03f440ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76cfbee28f3724dbd8aabb85214dd2faaa83930ad252c5127cc1f1ba3051ca55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.409145 5031 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:39:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcf0295e9ca79b5151635023f07166d4ae6d370443fb9a53c6222c45a86708ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:40:10Z is after 2025-08-24T17:21:41Z" Jan 29 
08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.422461 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.422515 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.422529 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.422545 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.422554 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.448229 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-588df" podStartSLOduration=64.448206258 podStartE2EDuration="1m4.448206258s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.431821821 +0000 UTC m=+90.931409783" watchObservedRunningTime="2026-01-29 08:40:10.448206258 +0000 UTC m=+90.947794210" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.448412 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ghc5v" podStartSLOduration=64.448406614 podStartE2EDuration="1m4.448406614s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.447948831 +0000 UTC m=+90.947536783" watchObservedRunningTime="2026-01-29 08:40:10.448406614 +0000 UTC m=+90.947994566" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.484276 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.484252898 podStartE2EDuration="1m9.484252898s" podCreationTimestamp="2026-01-29 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.466918223 +0000 UTC m=+90.966506175" watchObservedRunningTime="2026-01-29 08:40:10.484252898 +0000 UTC m=+90.983840850" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.484452 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.484444783 podStartE2EDuration="1m5.484444783s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.483824876 +0000 UTC m=+90.983412828" watchObservedRunningTime="2026-01-29 08:40:10.484444783 +0000 UTC m=+90.984032755" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.524924 5031 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.524962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.525011 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.525050 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.525063 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.536928 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.536905352 podStartE2EDuration="5.536905352s" podCreationTimestamp="2026-01-29 08:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.523356645 +0000 UTC m=+91.022944597" watchObservedRunningTime="2026-01-29 08:40:10.536905352 +0000 UTC m=+91.036493304" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.600619 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ffnzh" podStartSLOduration=64.60059602 podStartE2EDuration="1m4.60059602s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:10.600009684 +0000 UTC m=+91.099597636" watchObservedRunningTime="2026-01-29 08:40:10.60059602 +0000 UTC m=+91.100184012" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.628325 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.628387 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.628400 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.628418 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.628431 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.730579 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.730622 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.730634 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.730650 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.730661 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.820081 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:03:25.865278976 +0000 UTC Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.832948 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.832997 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.833016 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.833033 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.833042 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.935512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.935564 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.935577 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.935595 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:10 crc kubenswrapper[5031]: I0129 08:40:10.935608 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:10Z","lastTransitionTime":"2026-01-29T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.037752 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.037808 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.037817 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.037833 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.037844 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.139922 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.139960 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.139971 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.139986 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.139997 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.242076 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.242159 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.242179 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.242209 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.242244 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.281917 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.281968 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.282012 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.281917 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:11 crc kubenswrapper[5031]: E0129 08:40:11.282102 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:11 crc kubenswrapper[5031]: E0129 08:40:11.282271 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:11 crc kubenswrapper[5031]: E0129 08:40:11.282449 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:11 crc kubenswrapper[5031]: E0129 08:40:11.282602 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.345490 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.345571 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.345596 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.345627 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.345652 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.448036 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.448086 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.448097 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.448114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.448126 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.550407 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.550489 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.550516 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.550547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.550571 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.652675 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.652747 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.652769 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.652792 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.652809 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.755962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.756024 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.756040 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.756061 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.756077 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.820667 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:31:12.527342416 +0000 UTC Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.858137 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.858167 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.858175 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.858192 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.858203 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.959880 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.959923 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.959935 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.959952 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:11 crc kubenswrapper[5031]: I0129 08:40:11.959963 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:11Z","lastTransitionTime":"2026-01-29T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.062251 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.062288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.062299 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.062314 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.062323 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.164636 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.164683 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.164701 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.164724 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.164741 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.268676 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.269065 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.269245 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.269478 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.269649 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.373247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.373301 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.373329 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.373355 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.373401 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.476762 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.476830 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.476845 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.476870 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.476887 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.580572 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.580633 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.580661 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.580699 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.580732 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.684558 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.684645 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.684670 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.684699 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.684720 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.786908 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.786989 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.787002 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.787023 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.787039 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.821460 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:41:14.740866546 +0000 UTC Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.890605 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.890644 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.890654 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.890668 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.890677 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.993182 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.993221 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.993232 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.993249 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:12 crc kubenswrapper[5031]: I0129 08:40:12.993260 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:12Z","lastTransitionTime":"2026-01-29T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.095626 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.095683 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.095692 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.095706 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.095714 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.198682 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.198711 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.198720 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.198732 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.198741 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.281983 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.282128 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.282144 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:13 crc kubenswrapper[5031]: E0129 08:40:13.282227 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:13 crc kubenswrapper[5031]: E0129 08:40:13.282435 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.282462 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:13 crc kubenswrapper[5031]: E0129 08:40:13.282532 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:13 crc kubenswrapper[5031]: E0129 08:40:13.282641 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.301547 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.301594 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.301609 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.301626 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.301639 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.404782 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.404848 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.404865 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.404889 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.404908 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.507425 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.507492 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.507530 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.507563 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.507588 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.610653 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.610713 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.610730 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.610757 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.610777 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.714785 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.714836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.714851 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.714870 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.714886 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.819985 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.820353 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.820397 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.820444 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.820459 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.822439 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:48:30.230712102 +0000 UTC Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.923455 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.923518 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.923536 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.923562 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:13 crc kubenswrapper[5031]: I0129 08:40:13.923586 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:13Z","lastTransitionTime":"2026-01-29T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.026118 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.026185 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.026198 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.026213 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.026226 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.128784 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.128850 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.128870 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.128896 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.128913 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.231844 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.231911 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.231928 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.231951 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.231970 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.335288 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.335350 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.335412 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.335442 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.335467 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.439180 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.439229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.439252 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.439273 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.439285 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.541731 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.541812 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.541839 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.541868 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.541891 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.646179 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.646247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.646265 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.646293 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.646320 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.749182 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.749247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.749264 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.749290 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.749309 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.823051 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:04:41.770727037 +0000 UTC Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.852573 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.852634 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.852651 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.852675 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.852694 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.956512 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.956626 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.956655 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.956686 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:14 crc kubenswrapper[5031]: I0129 08:40:14.956709 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:14Z","lastTransitionTime":"2026-01-29T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.060044 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.060091 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.060104 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.060120 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.060132 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.163174 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.163229 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.163247 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.163272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.163288 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.265991 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.266048 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.266059 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.266078 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.266091 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.282224 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.282318 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.282251 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.282247 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:15 crc kubenswrapper[5031]: E0129 08:40:15.282417 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:15 crc kubenswrapper[5031]: E0129 08:40:15.282525 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:15 crc kubenswrapper[5031]: E0129 08:40:15.282561 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:15 crc kubenswrapper[5031]: E0129 08:40:15.282637 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.368474 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.368515 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.368532 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.368555 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.368570 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.471042 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.471114 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.471126 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.471141 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.471151 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.574598 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.574671 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.574695 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.574726 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.574782 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.677414 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.677461 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.677475 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.677495 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.677509 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.780346 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.780449 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.780465 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.780485 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.780498 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.823231 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:51:05.018531746 +0000 UTC Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.883221 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.883272 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.883281 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.883297 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.883307 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.985148 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.985179 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.985189 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.985202 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:15 crc kubenswrapper[5031]: I0129 08:40:15.985211 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:15Z","lastTransitionTime":"2026-01-29T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.087385 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.087424 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.087434 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.087455 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.087466 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.190458 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.190535 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.190557 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.190588 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.190610 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.293008 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.293055 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.293063 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.293078 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.293088 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.396897 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.396951 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.396962 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.396979 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.396990 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.499751 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.499801 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.499815 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.499836 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.499851 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.601911 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.601949 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.601958 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.601972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.601983 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.707804 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.707865 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.707887 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.707912 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.707930 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.810078 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.810134 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.810146 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.810162 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.810176 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.824309 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:54:24.798553848 +0000 UTC
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.913062 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.913102 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.913111 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.913124 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:16 crc kubenswrapper[5031]: I0129 08:40:16.913133 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:16Z","lastTransitionTime":"2026-01-29T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.015276 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.015351 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.015395 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.015412 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.015422 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.117298 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.117337 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.117348 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.117407 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.117420 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.220422 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.220471 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.220483 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.220495 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.220515 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.282172 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.282229 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:17 crc kubenswrapper[5031]: E0129 08:40:17.282344 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.282186 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
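A note on the repeating setters.go:603 entries above: each one carries the node's Ready condition as inline JSON, so the reason and message can be pulled out mechanically. A minimal sketch (the regex is ours, and the sample line is an abridged copy of one entry above, not a verbatim one):

```python
import json
import re

# Abridged copy of one of the setters.go:603 lines above.
line = (
    'I0129 08:40:16.707930 5031 setters.go:603] "Node became not ready" '
    'node="crc" condition={"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-29T08:40:16Z",'
    '"lastTransitionTime":"2026-01-29T08:40:16Z",'
    '"reason":"KubeletNotReady","message":"container runtime network not ready"}'
)

match = re.search(r'condition=(\{.*\})', line)
if match:
    condition = json.loads(match.group(1))
    print(condition["reason"], "-", condition["message"])
```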
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.282450 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:17 crc kubenswrapper[5031]: E0129 08:40:17.282623 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:17 crc kubenswrapper[5031]: E0129 08:40:17.282869 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:17 crc kubenswrapper[5031]: E0129 08:40:17.282953 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.323575 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.323620 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.323631 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.323652 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.323664 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.425781 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.425835 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.425850 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.425868 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.425880 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.528592 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.528668 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.528693 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.528724 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.528749 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.631430 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.631469 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.631485 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.631507 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.631523 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.734454 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.734822 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.734991 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.735132 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.735273 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.825476 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:55:13.95260144 +0000 UTC
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.837675 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.837726 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.837739 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.837758 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.837771 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.940891 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.940942 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.940954 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.940972 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.940985 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.955715 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.955761 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.955775 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.955790 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.955801 5031 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:40:17Z","lastTransitionTime":"2026-01-29T08:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 08:40:17 crc kubenswrapper[5031]: I0129 08:40:17.999838 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"]
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.000325 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
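Every "NetworkPluginNotReady" entry above points at the same root cause: the kubelet finds nothing in /etc/kubernetes/cni/net.d/. A quick way to confirm that from the node itself; the path is taken straight from the error message, and running this on the node with sufficient privileges is an assumption:

```python
import os

# Directory named in the kubelet error message above.
cni_dir = "/etc/kubernetes/cni/net.d/"
try:
    entries = os.listdir(cni_dir)
except FileNotFoundError:
    entries = []
# Kubelet keeps reporting NetworkReady=false until a CNI config file appears here.
print(f"{cni_dir}: {entries if entries else 'no CNI configuration files'}")
```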
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.003136 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.003181 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.004610 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.004928 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.043486 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mfrbv" podStartSLOduration=72.043457995 podStartE2EDuration="1m12.043457995s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.026911082 +0000 UTC m=+98.526499034" watchObservedRunningTime="2026-01-29 08:40:18.043457995 +0000 UTC m=+98.543045977"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.091183 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.091246 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.091289 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb3f2e25-b0ba-47f9-8971-256107e61a50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.091329 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3f2e25-b0ba-47f9-8971-256107e61a50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.091386 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3f2e25-b0ba-47f9-8971-256107e61a50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.109178 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podStartSLOduration=72.109153761 podStartE2EDuration="1m12.109153761s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.092623209 +0000 UTC m=+98.592211161" watchObservedRunningTime="2026-01-29 08:40:18.109153761 +0000 UTC m=+98.608741743"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.109333 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rq2c4" podStartSLOduration=72.109326406 podStartE2EDuration="1m12.109326406s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.108790071 +0000 UTC m=+98.608378023" watchObservedRunningTime="2026-01-29 08:40:18.109326406 +0000 UTC m=+98.608914398"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.142058 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.14203872 podStartE2EDuration="13.14203872s" podCreationTimestamp="2026-01-29 08:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.137151071 +0000 UTC m=+98.636739023" watchObservedRunningTime="2026-01-29 08:40:18.14203872 +0000 UTC m=+98.641626692"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.152311 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.152292124 podStartE2EDuration="49.152292124s" podCreationTimestamp="2026-01-29 08:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.152236672 +0000 UTC m=+98.651824634" watchObservedRunningTime="2026-01-29 08:40:18.152292124 +0000 UTC m=+98.651880076"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.192829 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3f2e25-b0ba-47f9-8971-256107e61a50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.192883 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.192919 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.192962 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb3f2e25-b0ba-47f9-8971-256107e61a50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.193001 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3f2e25-b0ba-47f9-8971-256107e61a50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.193010 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.193057 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb3f2e25-b0ba-47f9-8971-256107e61a50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.194139 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3f2e25-b0ba-47f9-8971-256107e61a50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.199105 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3f2e25-b0ba-47f9-8971-256107e61a50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.208471 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb3f2e25-b0ba-47f9-8971-256107e61a50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x8rgq\" (UID: \"cb3f2e25-b0ba-47f9-8971-256107e61a50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.314941 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq"
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.825839 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:48:30.056003206 +0000 UTC
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.825959 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.832796 5031 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.952491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq" event={"ID":"cb3f2e25-b0ba-47f9-8971-256107e61a50","Type":"ContainerStarted","Data":"f855edfe4e04635981f1d91501eeb06b4bd2ad1b0d67ed44f3101cacdadc38c0"}
Jan 29 08:40:18 crc kubenswrapper[5031]: I0129 08:40:18.952556 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq" event={"ID":"cb3f2e25-b0ba-47f9-8971-256107e61a50","Type":"ContainerStarted","Data":"e1d1832c0ce306d1f41c259b0cc0b6d18688990dbbaad409c4cb52d703733bbf"}
Jan 29 08:40:19 crc kubenswrapper[5031]: I0129 08:40:19.281880 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:19 crc kubenswrapper[5031]: I0129 08:40:19.281879 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:19 crc kubenswrapper[5031]: E0129 08:40:19.282171 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:19 crc kubenswrapper[5031]: I0129 08:40:19.282223 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:19 crc kubenswrapper[5031]: E0129 08:40:19.282465 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:19 crc kubenswrapper[5031]: E0129 08:40:19.282583 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:19 crc kubenswrapper[5031]: I0129 08:40:19.282593 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
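Note how the certificate_manager.go:356 entries above compute a different rotation deadline on each pass (Dec 18, Dec 21, Dec 14) against the same Feb 24 expiration: the deadline is jittered. Our reading of client-go's certificate manager is that it picks a random point at roughly 70-90% of the certificate's validity window; the sketch below encodes that assumption, and the notBefore value is also an assumption, since the log only shows the expiration:

```python
import random
from datetime import datetime, timedelta

# Expiration taken from the log entries above.
not_after = datetime(2026, 2, 24, 5, 53, 3)
# Assumption: issued roughly a year earlier; the log does not show notBefore.
not_before = datetime(2025, 2, 24, 5, 53, 3)

total = not_after - not_before
# Assumed client-go behavior: rotation deadline at a random point
# between 70% and 90% of the certificate's validity period.
deadline = not_before + timedelta(
    seconds=total.total_seconds() * random.uniform(0.7, 0.9)
)
print("rotation deadline:", deadline)  # lands in Nov 2025 - Jan 2026, like the log
```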
Jan 29 08:40:19 crc kubenswrapper[5031]: E0129 08:40:19.282755 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:21 crc kubenswrapper[5031]: I0129 08:40:21.281630 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:21 crc kubenswrapper[5031]: I0129 08:40:21.281671 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:21 crc kubenswrapper[5031]: I0129 08:40:21.281763 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:21 crc kubenswrapper[5031]: E0129 08:40:21.281885 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:21 crc kubenswrapper[5031]: I0129 08:40:21.281933 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:21 crc kubenswrapper[5031]: E0129 08:40:21.281980 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:21 crc kubenswrapper[5031]: E0129 08:40:21.282125 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:21 crc kubenswrapper[5031]: E0129 08:40:21.282253 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:23 crc kubenswrapper[5031]: I0129 08:40:23.281417 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:23 crc kubenswrapper[5031]: E0129 08:40:23.281525 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:23 crc kubenswrapper[5031]: I0129 08:40:23.281659 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:23 crc kubenswrapper[5031]: I0129 08:40:23.281932 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:23 crc kubenswrapper[5031]: I0129 08:40:23.281949 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:23 crc kubenswrapper[5031]: E0129 08:40:23.282063 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:23 crc kubenswrapper[5031]: E0129 08:40:23.282114 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:23 crc kubenswrapper[5031]: E0129 08:40:23.282176 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:23 crc kubenswrapper[5031]: I0129 08:40:23.282251 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"
Jan 29 08:40:23 crc kubenswrapper[5031]: E0129 08:40:23.282420 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9"
Jan 29 08:40:24 crc kubenswrapper[5031]: I0129 08:40:24.355634 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:24 crc kubenswrapper[5031]: E0129 08:40:24.355866 5031 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 08:40:24 crc kubenswrapper[5031]: E0129 08:40:24.355938 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs podName:20a410c7-0476-4e62-9ee1-5fb6998f308f nodeName:}" failed. No retries permitted until 2026-01-29 08:41:28.355908235 +0000 UTC m=+168.855496197 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs") pod "network-metrics-daemon-wnmhx" (UID: "20a410c7-0476-4e62-9ee1-5fb6998f308f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 08:40:25 crc kubenswrapper[5031]: I0129 08:40:25.281458 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:25 crc kubenswrapper[5031]: I0129 08:40:25.281547 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:25 crc kubenswrapper[5031]: E0129 08:40:25.281572 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:25 crc kubenswrapper[5031]: I0129 08:40:25.281459 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:25 crc kubenswrapper[5031]: I0129 08:40:25.281797 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:25 crc kubenswrapper[5031]: E0129 08:40:25.281790 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:25 crc kubenswrapper[5031]: E0129 08:40:25.281891 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:25 crc kubenswrapper[5031]: E0129 08:40:25.281957 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:27 crc kubenswrapper[5031]: I0129 08:40:27.281687 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:27 crc kubenswrapper[5031]: I0129 08:40:27.281725 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:27 crc kubenswrapper[5031]: I0129 08:40:27.281900 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:27 crc kubenswrapper[5031]: E0129 08:40:27.282018 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:27 crc kubenswrapper[5031]: I0129 08:40:27.282067 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:27 crc kubenswrapper[5031]: E0129 08:40:27.282295 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:27 crc kubenswrapper[5031]: E0129 08:40:27.282389 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:27 crc kubenswrapper[5031]: E0129 08:40:27.282455 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:29 crc kubenswrapper[5031]: I0129 08:40:29.282317 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:29 crc kubenswrapper[5031]: I0129 08:40:29.282614 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:29 crc kubenswrapper[5031]: E0129 08:40:29.282647 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:29 crc kubenswrapper[5031]: I0129 08:40:29.282881 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:29 crc kubenswrapper[5031]: E0129 08:40:29.282878 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:29 crc kubenswrapper[5031]: E0129 08:40:29.282932 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:29 crc kubenswrapper[5031]: I0129 08:40:29.282964 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:29 crc kubenswrapper[5031]: E0129 08:40:29.283010 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:31 crc kubenswrapper[5031]: I0129 08:40:31.281702 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:31 crc kubenswrapper[5031]: I0129 08:40:31.281871 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:31 crc kubenswrapper[5031]: E0129 08:40:31.281975 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:31 crc kubenswrapper[5031]: I0129 08:40:31.282038 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:31 crc kubenswrapper[5031]: I0129 08:40:31.282161 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:31 crc kubenswrapper[5031]: E0129 08:40:31.282332 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:31 crc kubenswrapper[5031]: E0129 08:40:31.282641 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:31 crc kubenswrapper[5031]: E0129 08:40:31.282699 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:33 crc kubenswrapper[5031]: I0129 08:40:33.281898 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:33 crc kubenswrapper[5031]: I0129 08:40:33.281901 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:33 crc kubenswrapper[5031]: I0129 08:40:33.281921 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:33 crc kubenswrapper[5031]: I0129 08:40:33.281956 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:33 crc kubenswrapper[5031]: E0129 08:40:33.282138 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:33 crc kubenswrapper[5031]: E0129 08:40:33.282195 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:33 crc kubenswrapper[5031]: E0129 08:40:33.282245 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:33 crc kubenswrapper[5031]: E0129 08:40:33.282289 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:34 crc kubenswrapper[5031]: I0129 08:40:34.282786 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"
Jan 29 08:40:34 crc kubenswrapper[5031]: E0129 08:40:34.282939 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f7pds_openshift-ovn-kubernetes(2afca9b4-a79c-40db-8c5f-0369e09228b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9"
Jan 29 08:40:35 crc kubenswrapper[5031]: I0129 08:40:35.282222 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
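The ovnkube-controller entries above ("back-off 40s") and the kube-multus entry a little further down ("back-off 10s") are both the kubelet's CrashLoopBackOff cycle, which is generally understood to double the restart delay per crash from a 10s base up to a 5-minute cap; a sketch under those assumed parameters:

```python
# CrashLoopBackOff delays: assumed 10s base, doubling, 5-minute cap.
delay, cap = 10, 300
for crash in range(1, 8):
    print(f"crash {crash}: back-off {delay}s")  # 10, 20, 40, ... as in the log
    delay = min(delay * 2, cap)
```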
Jan 29 08:40:35 crc kubenswrapper[5031]: I0129 08:40:35.282254 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:35 crc kubenswrapper[5031]: I0129 08:40:35.282294 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:35 crc kubenswrapper[5031]: I0129 08:40:35.282269 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:35 crc kubenswrapper[5031]: E0129 08:40:35.282402 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:35 crc kubenswrapper[5031]: E0129 08:40:35.282598 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:35 crc kubenswrapper[5031]: E0129 08:40:35.282624 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:35 crc kubenswrapper[5031]: E0129 08:40:35.282793 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:37 crc kubenswrapper[5031]: I0129 08:40:37.281585 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:37 crc kubenswrapper[5031]: I0129 08:40:37.281578 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:37 crc kubenswrapper[5031]: I0129 08:40:37.281637 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:37 crc kubenswrapper[5031]: I0129 08:40:37.281742 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:37 crc kubenswrapper[5031]: E0129 08:40:37.281923 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:37 crc kubenswrapper[5031]: E0129 08:40:37.282054 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:37 crc kubenswrapper[5031]: E0129 08:40:37.282160 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:37 crc kubenswrapper[5031]: E0129 08:40:37.282280 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:39 crc kubenswrapper[5031]: I0129 08:40:39.281845 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:39 crc kubenswrapper[5031]: I0129 08:40:39.281895 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:39 crc kubenswrapper[5031]: I0129 08:40:39.281866 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:39 crc kubenswrapper[5031]: I0129 08:40:39.281845 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:39 crc kubenswrapper[5031]: E0129 08:40:39.281972 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:39 crc kubenswrapper[5031]: E0129 08:40:39.282028 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:39 crc kubenswrapper[5031]: E0129 08:40:39.282106 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:39 crc kubenswrapper[5031]: E0129 08:40:39.282209 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.022110 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/1.log"
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.022524 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/0.log"
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.022560 5031 generic.go:334] "Generic (PLEG): container finished" podID="e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad" containerID="d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6" exitCode=1
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.022593 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerDied","Data":"d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6"}
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.022630 5031 scope.go:117] "RemoveContainer" containerID="58eee62dd0e6fd920c5b8738f737d93656029ae95a2dade006ae17a289f7b558"
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.023750 5031 scope.go:117] "RemoveContainer" containerID="d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6"
Jan 29 08:40:40 crc kubenswrapper[5031]: E0129 08:40:40.024579 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ghc5v_openshift-multus(e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad)\"" pod="openshift-multus/multus-ghc5v" podUID="e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad"
Jan 29 08:40:40 crc kubenswrapper[5031]: I0129 08:40:40.045095 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x8rgq" podStartSLOduration=94.045078665 podStartE2EDuration="1m34.045078665s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.976532733 +0000 UTC m=+99.476120745" watchObservedRunningTime="2026-01-29 08:40:40.045078665 +0000 UTC m=+120.544666617"
podStartE2EDuration="1m34.045078665s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:18.976532733 +0000 UTC m=+99.476120745" watchObservedRunningTime="2026-01-29 08:40:40.045078665 +0000 UTC m=+120.544666617" Jan 29 08:40:40 crc kubenswrapper[5031]: E0129 08:40:40.244047 5031 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 08:40:40 crc kubenswrapper[5031]: E0129 08:40:40.389441 5031 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:40:41 crc kubenswrapper[5031]: I0129 08:40:41.027933 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/1.log" Jan 29 08:40:41 crc kubenswrapper[5031]: I0129 08:40:41.281982 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:41 crc kubenswrapper[5031]: I0129 08:40:41.282081 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:41 crc kubenswrapper[5031]: E0129 08:40:41.282127 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:41 crc kubenswrapper[5031]: I0129 08:40:41.282137 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:41 crc kubenswrapper[5031]: I0129 08:40:41.282143 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:41 crc kubenswrapper[5031]: E0129 08:40:41.282234 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:41 crc kubenswrapper[5031]: E0129 08:40:41.282326 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
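
The CrashLoopBackOff entry above ("back-off 10s restarting failed container=kube-multus ...") is the kubelet's per-container restart backoff: each failed restart roughly doubles the delay before the next attempt. A minimal Go sketch of that capped doubling, assuming the kubelet's usual 10s base and a 5m ceiling (only the 10s base appears in this log; the ceiling is an assumption):

    package main

    import (
    	"fmt"
    	"time"
    )

    // restartDelay mimics the kubelet's crash-loop backoff: start at 10s
    // (the "back-off 10s" seen in the log) and double per failure up to a
    // ceiling. The 5m ceiling is an assumed kubelet default, not taken
    // from this log.
    func restartDelay(failures int) time.Duration {
    	const (
    		base    = 10 * time.Second
    		ceiling = 5 * time.Minute
    	)
    	d := base
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d > ceiling {
    			return ceiling
    		}
    	}
    	return d
    }

    func main() {
    	for f := 1; f <= 7; f++ {
    		fmt.Printf("failure %d -> back-off %v\n", f, restartDelay(f))
    	}
    }

The "RemoveContainer" at 08:40:55 and the ContainerStarted event at 08:40:56 further down show this loop retrying on schedule.
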
pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:41 crc kubenswrapper[5031]: E0129 08:40:41.282413 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:43 crc kubenswrapper[5031]: I0129 08:40:43.282653 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:43 crc kubenswrapper[5031]: E0129 08:40:43.282795 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:43 crc kubenswrapper[5031]: I0129 08:40:43.283029 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:43 crc kubenswrapper[5031]: E0129 08:40:43.283090 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:43 crc kubenswrapper[5031]: I0129 08:40:43.283225 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:43 crc kubenswrapper[5031]: E0129 08:40:43.283279 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:43 crc kubenswrapper[5031]: I0129 08:40:43.283437 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:43 crc kubenswrapper[5031]: E0129 08:40:43.283499 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:45 crc kubenswrapper[5031]: I0129 08:40:45.282000 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:45 crc kubenswrapper[5031]: I0129 08:40:45.282050 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:45 crc kubenswrapper[5031]: E0129 08:40:45.282772 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:45 crc kubenswrapper[5031]: I0129 08:40:45.282067 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:45 crc kubenswrapper[5031]: E0129 08:40:45.282878 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:45 crc kubenswrapper[5031]: I0129 08:40:45.282068 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:45 crc kubenswrapper[5031]: E0129 08:40:45.283195 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:45 crc kubenswrapper[5031]: E0129 08:40:45.283122 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:45 crc kubenswrapper[5031]: E0129 08:40:45.390667 5031 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:40:47 crc kubenswrapper[5031]: I0129 08:40:47.282104 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:47 crc kubenswrapper[5031]: E0129 08:40:47.282312 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:40:47 crc kubenswrapper[5031]: I0129 08:40:47.282617 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:47 crc kubenswrapper[5031]: I0129 08:40:47.282652 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:47 crc kubenswrapper[5031]: E0129 08:40:47.282695 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:47 crc kubenswrapper[5031]: I0129 08:40:47.282756 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:40:47 crc kubenswrapper[5031]: E0129 08:40:47.282849 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:47 crc kubenswrapper[5031]: E0129 08:40:47.283133 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:40:48 crc kubenswrapper[5031]: I0129 08:40:48.283594 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.053931 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/3.log" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.056642 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerStarted","Data":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.057125 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.082766 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podStartSLOduration=103.082746685 podStartE2EDuration="1m43.082746685s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:40:49.080995485 +0000 UTC m=+129.580583447" watchObservedRunningTime="2026-01-29 08:40:49.082746685 +0000 UTC m=+129.582334647" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.256394 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wnmhx"] Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.256671 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:40:49 crc kubenswrapper[5031]: E0129 08:40:49.256836 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.282347 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.282451 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:40:49 crc kubenswrapper[5031]: E0129 08:40:49.282508 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:40:49 crc kubenswrapper[5031]: I0129 08:40:49.282531 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:40:49 crc kubenswrapper[5031]: E0129 08:40:49.282636 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:49 crc kubenswrapper[5031]: E0129 08:40:49.282750 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:50 crc kubenswrapper[5031]: E0129 08:40:50.391500 5031 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 08:40:51 crc kubenswrapper[5031]: I0129 08:40:51.282410 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:51 crc kubenswrapper[5031]: I0129 08:40:51.282438 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:51 crc kubenswrapper[5031]: I0129 08:40:51.282491 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:51 crc kubenswrapper[5031]: I0129 08:40:51.282410 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:51 crc kubenswrapper[5031]: E0129 08:40:51.282553 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:51 crc kubenswrapper[5031]: E0129 08:40:51.282615 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:51 crc kubenswrapper[5031]: E0129 08:40:51.282741 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:51 crc kubenswrapper[5031]: E0129 08:40:51.282852 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:53 crc kubenswrapper[5031]: I0129 08:40:53.282011 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:53 crc kubenswrapper[5031]: E0129 08:40:53.282238 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:53 crc kubenswrapper[5031]: I0129 08:40:53.282526 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:53 crc kubenswrapper[5031]: E0129 08:40:53.282606 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:53 crc kubenswrapper[5031]: I0129 08:40:53.282749 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:53 crc kubenswrapper[5031]: E0129 08:40:53.282832 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:53 crc kubenswrapper[5031]: I0129 08:40:53.283006 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:53 crc kubenswrapper[5031]: E0129 08:40:53.283087 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:55 crc kubenswrapper[5031]: I0129 08:40:55.282127 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:55 crc kubenswrapper[5031]: I0129 08:40:55.282206 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:55 crc kubenswrapper[5031]: E0129 08:40:55.282270 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:55 crc kubenswrapper[5031]: I0129 08:40:55.282352 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:55 crc kubenswrapper[5031]: E0129 08:40:55.282403 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:55 crc kubenswrapper[5031]: E0129 08:40:55.282626 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:55 crc kubenswrapper[5031]: I0129 08:40:55.282706 5031 scope.go:117] "RemoveContainer" containerID="d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6"
Jan 29 08:40:55 crc kubenswrapper[5031]: I0129 08:40:55.282894 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:55 crc kubenswrapper[5031]: E0129 08:40:55.283132 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:55 crc kubenswrapper[5031]: E0129 08:40:55.393678 5031 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
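
Every "Container runtime network not ready" / NetworkPluginNotReady entry above traces back to the same condition: nothing has yet written a network config into /etc/kubernetes/cni/net.d/ (on this node that file comes from the multus / ovn-kubernetes pods, and kube-multus is crash-looping at this point). A rough Go approximation of the readiness test a runtime applies to that directory; the accepted extensions are an assumption borrowed from common CNI config loaders, not from this log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniReady reports whether the CNI conf dir contains at least one
    // usable-looking network config file. This is an approximation of the
    // check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    func cniReady(confDir string) (bool, error) {
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ready, err := cniReady("/etc/kubernetes/cni/net.d")
    	fmt.Println(ready, err)
    }
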
Jan 29 08:40:56 crc kubenswrapper[5031]: I0129 08:40:56.084912 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/1.log"
Jan 29 08:40:56 crc kubenswrapper[5031]: I0129 08:40:56.084983 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerStarted","Data":"36a3e18c8bf74378ac5216bc97095f9be8985c97e82e42362c7bcc0b1857c92e"}
Jan 29 08:40:57 crc kubenswrapper[5031]: I0129 08:40:57.282112 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:57 crc kubenswrapper[5031]: E0129 08:40:57.282574 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:57 crc kubenswrapper[5031]: I0129 08:40:57.282588 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:57 crc kubenswrapper[5031]: I0129 08:40:57.282610 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:57 crc kubenswrapper[5031]: I0129 08:40:57.282588 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:57 crc kubenswrapper[5031]: E0129 08:40:57.282739 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:57 crc kubenswrapper[5031]: E0129 08:40:57.282663 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:40:57 crc kubenswrapper[5031]: E0129 08:40:57.282871 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:59 crc kubenswrapper[5031]: I0129 08:40:59.281476 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:40:59 crc kubenswrapper[5031]: I0129 08:40:59.281528 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:40:59 crc kubenswrapper[5031]: E0129 08:40:59.281645 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 08:40:59 crc kubenswrapper[5031]: I0129 08:40:59.281720 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:40:59 crc kubenswrapper[5031]: E0129 08:40:59.281843 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 08:40:59 crc kubenswrapper[5031]: I0129 08:40:59.281873 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:40:59 crc kubenswrapper[5031]: E0129 08:40:59.282265 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wnmhx" podUID="20a410c7-0476-4e62-9ee1-5fb6998f308f"
Jan 29 08:40:59 crc kubenswrapper[5031]: E0129 08:40:59.282303 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.281904 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.281948 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.281917 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.281913 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284664 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284730 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284733 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284771 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284664 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 08:41:01 crc kubenswrapper[5031]: I0129 08:41:01.284774 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.303947 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:08 crc kubenswrapper[5031]: E0129 08:41:08.304060 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:10.304037482 +0000 UTC m=+270.803625434 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
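
The nestedpendingoperations.go:348 entry above shows the volume manager's retry backoff at its ceiling: the unmount keeps failing because the kubevirt.io.hostpath-provisioner CSI driver is not currently registered, so the next attempt is parked for the maximum "durationBeforeRetry 2m2s". A sketch of that exponential backoff, assuming a 500ms initial delay and a doubling factor (the 2m2s cap is taken from the log; the other constants are assumptions):

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetry doubles the wait after every consecutive failure of the
    // same volume operation, capped at 2m2s (the "durationBeforeRetry 2m2s"
    // printed in the log once the cap is reached).
    func nextRetry(prev time.Duration) time.Duration {
    	const (
    		initial  = 500 * time.Millisecond // assumed starting delay
    		maxDelay = 2*time.Minute + 2*time.Second
    	)
    	if prev == 0 {
    		return initial
    	}
    	next := prev * 2
    	if next > maxDelay {
    		return maxDelay
    	}
    	return next
    }

    func main() {
    	var d time.Duration
    	for i := 0; i < 10; i++ {
    		d = nextRetry(d)
    		fmt.Println(d) // 500ms, 1s, 2s, 4s, ... then pinned at 2m2s
    	}
    }
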
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.304117 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.304153 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.304182 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.304219 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.305161 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.309273 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.309337 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.310723 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.493840 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.493905 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.498005 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.513195 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.520837 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 08:41:08 crc kubenswrapper[5031]: W0129 08:41:08.723148 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b193f3d384a4a6b36949300e1adfae7347a1d43d3829c643a1ebcf48a0b2e702 WatchSource:0}: Error finding container b193f3d384a4a6b36949300e1adfae7347a1d43d3829c643a1ebcf48a0b2e702: Status 404 returned error can't find the container with id b193f3d384a4a6b36949300e1adfae7347a1d43d3829c643a1ebcf48a0b2e702
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.903806 5031 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.935580 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.936181 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.936256 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.936864 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n"
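
The liveness probe failure above is a plain HTTP GET against http://127.0.0.1:8798/health that was answered with "connection refused" (machine-config-daemon was not listening yet at 08:41:08). A minimal Go stand-in for this style of probe, where any response below 400 counts as success; the 1s timeout is an illustrative choice, not the pod's configured probe timeout:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe issues the same kind of HTTP GET a liveness probe performs and
    // treats 2xx/3xx as healthy. A refused connection, as in the log,
    // surfaces as a transport error rather than a status code.
    func probe(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probe("http://127.0.0.1:8798/health"))
    }
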
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.938046 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.941175 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945051 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fmrqw"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945420 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945510 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945590 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945738 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945787 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.945896 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.946053 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.947566 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.948240 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.948925 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.949354 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.950343 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.950891 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951063 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951199 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951338 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951504 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951654 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.951876 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.952023 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.952197 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.952557 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.952980 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w2sql"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.953414 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.953430 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.953964 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.953438 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.954180 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.954579 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.955920 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf74n"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.956297 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gf74n"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.956712 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9m279"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.957048 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.957973 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.958317 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lbjm4"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959137 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959189 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959237 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959278 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959287 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959305 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959245 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959356 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sp9n7"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.959957 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sp9n7"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.961287 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.970777 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.971041 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.971408 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973067 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973282 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973352 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973456 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973468 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973566 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.973765 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974332 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974388 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974542 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974644 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974732 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.974880 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975041 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975145 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975213 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975622 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975664 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975712 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975816 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975899 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.975955 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976004 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976085 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976182 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976282 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976394 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976410 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976712 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976758 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.976972 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977082 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977155 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977181 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977278 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977353 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98"]
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.977881 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978074 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978214 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978343 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978491 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978624 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978810 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.978949 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979058 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979140 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979217 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979306 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979355 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979476 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979499 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979534 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979591 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979642 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979704 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979771 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979799 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979880 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979970 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.979990 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.980054 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.980158 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.980311 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.980910 5031 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 08:41:08 crc kubenswrapper[5031]: I0129 08:41:08.981085 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.010880 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.031805 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv82k\" (UniqueName: \"kubernetes.io/projected/d9509194-11a3-49da-89be-f1c25b0b4268-kube-api-access-bv82k\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.031892 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032055 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-auth-proxy-config\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032097 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9509194-11a3-49da-89be-f1c25b0b4268-machine-approver-tls\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032120 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/854630f2-aa2a-4626-a201-a65f7ea05a9a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032136 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblkd\" (UniqueName: \"kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032158 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-config\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032213 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032247 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prcg\" (UniqueName: \"kubernetes.io/projected/854630f2-aa2a-4626-a201-a65f7ea05a9a-kube-api-access-8prcg\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.032273 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.033445 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.034499 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vvvr9"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.037337 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.038062 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.038294 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.038333 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.040087 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gxmb"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.040712 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.041434 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.041907 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.042218 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.043006 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.043937 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.044129 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.045938 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.046916 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4v677"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.047517 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.054644 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.057873 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.059155 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.061516 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.064112 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.064132 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.064758 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.071793 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.072063 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.072581 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.073994 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.074552 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.074695 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.077950 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.081880 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.082303 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.087287 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.103147 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.107436 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.114469 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.115000 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.115407 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.115976 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.116342 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.116740 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.117221 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.117546 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.117518 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.117486 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.118257 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.121148 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.121260 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.121334 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.121540 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.122552 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.122962 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zbnth"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.123301 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.123848 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.124124 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.125227 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ab8a5493b750843f0592d9165118cb4abf3f6b6527a4aa380aa9f16ed1eac88f"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.125410 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"96837d3ddc2e9f2fc6ce2d8595013521fde2660eb68a34ee1b86198f6ab90024"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.130916 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133076 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbvl\" (UniqueName: \"kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133121 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133220 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133241 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mskvg\" (UniqueName: \"kubernetes.io/projected/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-kube-api-access-mskvg\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133263 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4xx\" (UniqueName: \"kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133283 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-config\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: 
\"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133303 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133322 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133342 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1220954-121b-495d-b2e2-0bb75ce20ca8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133384 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-auth-proxy-config\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133406 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9509194-11a3-49da-89be-f1c25b0b4268-machine-approver-tls\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133429 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133451 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-encryption-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133471 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4731ec6c-5138-422c-8591-fc405d201db7-serving-cert\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133492 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-serving-cert\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133519 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblkd\" (UniqueName: \"kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133542 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-config\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133561 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133582 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9bw\" (UniqueName: \"kubernetes.io/projected/db9f8ea0-be69-4991-801b-4dea935a10b0-kube-api-access-qp9bw\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133602 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a3bbd5e-4071-4761-b455-e830e12dfa81-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133658 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-serving-cert\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133683 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-service-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133707 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/107ea484-2b37-42f5-a7d8-f844fa231948-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133732 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133752 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db9f8ea0-be69-4991-801b-4dea935a10b0-metrics-tls\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133774 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133794 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-images\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133814 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-node-pullsecrets\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133834 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6kj6\" (UniqueName: \"kubernetes.io/projected/0d371617-7dd8-407f-b233-73ec3cd483e2-kube-api-access-q6kj6\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133855 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 
08:41:09.133868 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133882 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svn5z\" (UniqueName: \"kubernetes.io/projected/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-kube-api-access-svn5z\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133897 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133913 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1220954-121b-495d-b2e2-0bb75ce20ca8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133927 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133942 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhkm\" (UniqueName: \"kubernetes.io/projected/8a3bbd5e-4071-4761-b455-e830e12dfa81-kube-api-access-xnhkm\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133956 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-client\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133972 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.133988 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134063 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-client\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134219 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8txw\" (UniqueName: \"kubernetes.io/projected/4731ec6c-5138-422c-8591-fc405d201db7-kube-api-access-c8txw\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134239 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-config\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134271 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-client\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134287 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134301 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-policies\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134315 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134331 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134347 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107ea484-2b37-42f5-a7d8-f844fa231948-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134376 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzl9\" (UniqueName: \"kubernetes.io/projected/8ad39856-53d1-4f86-9ebe-9477b4cd4106-kube-api-access-gfzl9\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134378 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-auth-proxy-config\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134392 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-serving-cert\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134768 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpd58\" (UniqueName: \"kubernetes.io/projected/5f4e6cea-65e3-446f-9925-d63d00fc235f-kube-api-access-lpd58\") pod \"downloads-7954f5f757-sp9n7\" (UID: \"5f4e6cea-65e3-446f-9925-d63d00fc235f\") " pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134789 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4731ec6c-5138-422c-8591-fc405d201db7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134804 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-encryption-config\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134819 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-config\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134854 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134872 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134898 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134913 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-trusted-ca\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134927 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-audit\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.134941 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-dir\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135154 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-serving-cert\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135277 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: 
\"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135333 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6fc\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-kube-api-access-vj6fc\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135409 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135446 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135488 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/854630f2-aa2a-4626-a201-a65f7ea05a9a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135535 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39b067-ab63-48ca-9930-631d73d2811c-serving-cert\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135621 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135654 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135699 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-config\") pod 
\"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135736 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-image-import-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135764 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135879 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135960 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmt8\" (UniqueName: \"kubernetes.io/projected/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-kube-api-access-6vmt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.135989 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136010 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136029 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136114 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mjtm\" (UniqueName: 
\"kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136147 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9509194-11a3-49da-89be-f1c25b0b4268-config\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136150 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136231 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-service-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136259 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8prcg\" (UniqueName: \"kubernetes.io/projected/854630f2-aa2a-4626-a201-a65f7ea05a9a-kube-api-access-8prcg\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136286 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136306 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-serving-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136472 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-audit-dir\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136546 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert\") pod \"console-f9d7485db-lbjm4\" 
(UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136568 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bg9m\" (UniqueName: \"kubernetes.io/projected/be39b067-ab63-48ca-9930-631d73d2811c-kube-api-access-4bg9m\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136583 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136597 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136614 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwtz\" (UniqueName: \"kubernetes.io/projected/e1220954-121b-495d-b2e2-0bb75ce20ca8-kube-api-access-fnwtz\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136627 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.136646 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv82k\" (UniqueName: \"kubernetes.io/projected/d9509194-11a3-49da-89be-f1c25b0b4268-kube-api-access-bv82k\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137061 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137434 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b2b99735a9e1274cb535ecaac98cdd29578542340b8bc735d04a05da6302d72"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137466 5031 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b7ce6afacd2667b6f6cabf38703731a789dc352c970f5b51ba9dff9b356a87bb"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137481 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7e073e143df0d46e3c78f73e7575ec5b26d2aab53557d038b7a68098dba8b363"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137493 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b193f3d384a4a6b36949300e1adfae7347a1d43d3829c643a1ebcf48a0b2e702"} Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.137506 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xnzz7"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.138153 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.138331 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.138426 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.139014 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.139145 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/854630f2-aa2a-4626-a201-a65f7ea05a9a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.141135 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.141633 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.141694 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.142179 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.142742 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.142769 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9509194-11a3-49da-89be-f1c25b0b4268-machine-approver-tls\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.147451 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.148072 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.158435 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.159988 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fmrqw"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.163034 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.165182 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.167556 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.174081 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.177442 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sp9n7"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.178983 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.179521 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9m279"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.182766 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vvvr9"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.184806 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.186692 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gxmb"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.187904 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.189122 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.191268 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w2sql"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.193865 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.195279 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.198099 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.199439 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf74n"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.200596 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.201603 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.202876 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.203984 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.205153 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.206222 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.207191 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.208098 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.209280 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.211914 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.214876 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.218276 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.219569 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qwhkt"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.220849 
5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.221890 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tsvjs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.223196 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.223254 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.224501 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zbnth"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.225814 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.227430 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.228629 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.230011 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.231807 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tsvjs"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.232946 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qwhkt"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.234002 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-54g8c"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.234528 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.235140 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-54g8c"] Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.237407 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1220954-121b-495d-b2e2-0bb75ce20ca8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.237759 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.237847 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnhkm\" (UniqueName: \"kubernetes.io/projected/8a3bbd5e-4071-4761-b455-e830e12dfa81-kube-api-access-xnhkm\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.237927 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-client\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238035 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-client\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238185 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-config\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238233 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238323 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8txw\" (UniqueName: 
\"kubernetes.io/projected/4731ec6c-5138-422c-8591-fc405d201db7-kube-api-access-c8txw\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238432 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-client\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238553 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238661 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238738 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107ea484-2b37-42f5-a7d8-f844fa231948-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238894 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-policies\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238975 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-serving-cert\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239043 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzl9\" (UniqueName: \"kubernetes.io/projected/8ad39856-53d1-4f86-9ebe-9477b4cd4106-kube-api-access-gfzl9\") pod \"etcd-operator-b45778765-7gxmb\" (UID: 
\"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239121 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpd58\" (UniqueName: \"kubernetes.io/projected/5f4e6cea-65e3-446f-9925-d63d00fc235f-kube-api-access-lpd58\") pod \"downloads-7954f5f757-sp9n7\" (UID: \"5f4e6cea-65e3-446f-9925-d63d00fc235f\") " pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239197 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-encryption-config\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239297 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-config\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239421 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239515 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4731ec6c-5138-422c-8591-fc405d201db7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239590 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239674 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-trusted-ca\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239746 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-audit\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239816 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-dir\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239890 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-serving-cert\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239957 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240030 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6fc\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-kube-api-access-vj6fc\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240102 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240173 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241438 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107ea484-2b37-42f5-a7d8-f844fa231948-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.239613 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-policies\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240004 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: 
\"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240704 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-audit\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240770 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-config\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240875 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-audit-dir\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.238896 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1220954-121b-495d-b2e2-0bb75ce20ca8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241114 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241131 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4731ec6c-5138-422c-8591-fc405d201db7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241149 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241628 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.240196 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241691 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.241917 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-trusted-ca\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242034 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-serving-cert\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242042 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39b067-ab63-48ca-9930-631d73d2811c-serving-cert\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242110 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242144 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242179 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-image-import-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242204 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242236 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmt8\" (UniqueName: \"kubernetes.io/projected/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-kube-api-access-6vmt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242261 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242285 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242317 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242380 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mjtm\" (UniqueName: \"kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242408 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242437 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-service-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242464 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-serving-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242487 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-audit-dir\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242508 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242541 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bg9m\" (UniqueName: \"kubernetes.io/projected/be39b067-ab63-48ca-9930-631d73d2811c-kube-api-access-4bg9m\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242564 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242590 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242618 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242653 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnwtz\" (UniqueName: \"kubernetes.io/projected/e1220954-121b-495d-b2e2-0bb75ce20ca8-kube-api-access-fnwtz\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242682 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242705 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert\") pod 
\"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242729 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mskvg\" (UniqueName: \"kubernetes.io/projected/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-kube-api-access-mskvg\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242756 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbvl\" (UniqueName: \"kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242782 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl4xx\" (UniqueName: \"kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242806 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-config\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242835 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242861 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1220954-121b-495d-b2e2-0bb75ce20ca8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242887 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242892 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242912 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242936 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242938 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-encryption-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242988 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4731ec6c-5138-422c-8591-fc405d201db7-serving-cert\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243013 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-serving-cert\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243039 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-config\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243055 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243076 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a3bbd5e-4071-4761-b455-e830e12dfa81-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243093 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-serving-cert\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243108 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-service-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243126 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/107ea484-2b37-42f5-a7d8-f844fa231948-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243143 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243159 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db9f8ea0-be69-4991-801b-4dea935a10b0-metrics-tls\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243174 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9bw\" (UniqueName: \"kubernetes.io/projected/db9f8ea0-be69-4991-801b-4dea935a10b0-kube-api-access-qp9bw\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243189 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-images\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243204 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-node-pullsecrets\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243220 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6kj6\" (UniqueName: \"kubernetes.io/projected/0d371617-7dd8-407f-b233-73ec3cd483e2-kube-api-access-q6kj6\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243236 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243254 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243273 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243292 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243308 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svn5z\" (UniqueName: \"kubernetes.io/projected/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-kube-api-access-svn5z\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243387 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243433 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243553 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.243842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.242586 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-client\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.244326 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-node-pullsecrets\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.244637 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d371617-7dd8-407f-b233-73ec3cd483e2-audit-dir\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.245201 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-image-import-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.245381 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-serving-cert\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.246160 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-encryption-config\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.246696 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.246762 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-images\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.246834 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.246833 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3bbd5e-4071-4761-b455-e830e12dfa81-config\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.247135 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.247377 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-etcd-serving-ca\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.247404 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-etcd-client\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.247412 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.247493 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-service-ca-bundle\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.248060 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.248716 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.248784 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.248895 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39b067-ab63-48ca-9930-631d73d2811c-config\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.248911 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d371617-7dd8-407f-b233-73ec3cd483e2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249171 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249231 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249345 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a3bbd5e-4071-4761-b455-e830e12dfa81-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249458 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249539 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4731ec6c-5138-422c-8591-fc405d201db7-serving-cert\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-serving-cert\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249819 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249930 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.249967 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1220954-121b-495d-b2e2-0bb75ce20ca8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250020 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250290 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db9f8ea0-be69-4991-801b-4dea935a10b0-metrics-tls\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250290 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250771 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250896 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39b067-ab63-48ca-9930-631d73d2811c-serving-cert\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.250945 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/0d371617-7dd8-407f-b233-73ec3cd483e2-encryption-config\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.251244 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/107ea484-2b37-42f5-a7d8-f844fa231948-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.251293 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.253915 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.258125 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.278684 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.286498 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-serving-cert\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.299068 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.311324 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-client\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.319801 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.339665 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.354056 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-config\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.359660 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.365802 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.379612 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.385499 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ad39856-53d1-4f86-9ebe-9477b4cd4106-etcd-service-ca\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.419315 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.438345 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.460553 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.479901 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.499745 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.519538 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.539592 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.559122 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.580334 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.609690 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.618834 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.638221 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.666384 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" 
Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.678439 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.699639 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.720720 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.740724 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.758731 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.779779 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.799347 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.819759 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.839584 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.859170 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.899394 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.918819 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.939386 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.959227 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.979299 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 08:41:09 crc kubenswrapper[5031]: I0129 08:41:09.999558 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.019152 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.038788 5031 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.059818 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.079524 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.098967 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.118484 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.136801 5031 request.go:700] Waited for 1.014501022s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.138552 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.159108 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.179388 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.198936 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.218018 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.247018 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.258060 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.279119 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.298451 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.319686 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.338713 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.359114 5031 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.379606 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.398189 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.418626 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.438616 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.459101 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.479010 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.499199 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.518249 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.538911 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.580825 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sblkd\" (UniqueName: \"kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd\") pod \"route-controller-manager-6576b87f9c-cbgdv\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.601169 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prcg\" (UniqueName: \"kubernetes.io/projected/854630f2-aa2a-4626-a201-a65f7ea05a9a-kube-api-access-8prcg\") pod \"cluster-samples-operator-665b6dd947-kvh98\" (UID: \"854630f2-aa2a-4626-a201-a65f7ea05a9a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.612903 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv82k\" (UniqueName: \"kubernetes.io/projected/d9509194-11a3-49da-89be-f1c25b0b4268-kube-api-access-bv82k\") pod \"machine-approver-56656f9798-8mp2n\" (UID: \"d9509194-11a3-49da-89be-f1c25b0b4268\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.619144 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.638712 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 08:41:10 crc 
kubenswrapper[5031]: I0129 08:41:10.658994 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.678721 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.699103 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.718055 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.738750 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.758610 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.763767 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.778519 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.779850 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 08:41:10 crc kubenswrapper[5031]: W0129 08:41:10.793574 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9509194_11a3_49da_89be_f1c25b0b4268.slice/crio-863b973f87679303dc1e4392c95b04549558d2472ce100e943f2a350905aeed7 WatchSource:0}: Error finding container 863b973f87679303dc1e4392c95b04549558d2472ce100e943f2a350905aeed7: Status 404 returned error can't find the container with id 863b973f87679303dc1e4392c95b04549558d2472ce100e943f2a350905aeed7 Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.797845 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.818718 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.832806 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.839065 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.858754 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.879239 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.900915 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.917894 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.935302 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"] Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.938204 5031 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 08:41:10 crc kubenswrapper[5031]: I0129 08:41:10.960107 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.002188 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.006886 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.019432 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.039617 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.062382 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.101753 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnhkm\" (UniqueName: \"kubernetes.io/projected/8a3bbd5e-4071-4761-b455-e830e12dfa81-kube-api-access-xnhkm\") pod \"machine-api-operator-5694c8668f-w2sql\" (UID: \"8a3bbd5e-4071-4761-b455-e830e12dfa81\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.106628 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98"] Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.115677 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8txw\" (UniqueName: \"kubernetes.io/projected/4731ec6c-5138-422c-8591-fc405d201db7-kube-api-access-c8txw\") pod \"openshift-config-operator-7777fb866f-f8qtb\" (UID: \"4731ec6c-5138-422c-8591-fc405d201db7\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.131542 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzl9\" (UniqueName: \"kubernetes.io/projected/8ad39856-53d1-4f86-9ebe-9477b4cd4106-kube-api-access-gfzl9\") pod \"etcd-operator-b45778765-7gxmb\" (UID: \"8ad39856-53d1-4f86-9ebe-9477b4cd4106\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.140957 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" event={"ID":"d9509194-11a3-49da-89be-f1c25b0b4268","Type":"ContainerStarted","Data":"142146d36e6de4d13a4840f7ab7338629bbcb93752fe6568e395dfe527a65e4e"} Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.141014 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" event={"ID":"d9509194-11a3-49da-89be-f1c25b0b4268","Type":"ContainerStarted","Data":"863b973f87679303dc1e4392c95b04549558d2472ce100e943f2a350905aeed7"} Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.142549 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" event={"ID":"55a0f308-38a2-4bcf-b125-d7c0fa28f036","Type":"ContainerStarted","Data":"b0b793f3f52611d2d823fa6cf7d723454c3742c3f1b447fcef7332e67c479a0f"} Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.142579 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" event={"ID":"55a0f308-38a2-4bcf-b125-d7c0fa28f036","Type":"ContainerStarted","Data":"e3e821692e31ebf39b4c36b7c51949d8f1d553c8bbedb32869abd6d2d8a893fc"} Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.142902 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.145239 5031 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cbgdv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.145301 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.155272 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpd58\" (UniqueName: \"kubernetes.io/projected/5f4e6cea-65e3-446f-9925-d63d00fc235f-kube-api-access-lpd58\") pod \"downloads-7954f5f757-sp9n7\" (UID: \"5f4e6cea-65e3-446f-9925-d63d00fc235f\") " pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.157391 5031 request.go:700] Waited for 1.916381363s due to client-side throttling, not priority and fairness, request: 
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.157391 5031 request.go:700] Waited for 1.916381363s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.172839 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6fc\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-kube-api-access-vj6fc\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.175225 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.191893 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mjtm\" (UniqueName: \"kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm\") pod \"oauth-openshift-558db77b4-rjzm6\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.212629 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwtz\" (UniqueName: \"kubernetes.io/projected/e1220954-121b-495d-b2e2-0bb75ce20ca8-kube-api-access-fnwtz\") pod \"openshift-apiserver-operator-796bbdcf4f-ngjq9\" (UID: \"e1220954-121b-495d-b2e2-0bb75ce20ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.229658 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.237195 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svn5z\" (UniqueName: \"kubernetes.io/projected/5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961-kube-api-access-svn5z\") pod \"authentication-operator-69f744f599-9m279\" (UID: \"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.240999 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.258862 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bg9m\" (UniqueName: \"kubernetes.io/projected/be39b067-ab63-48ca-9930-631d73d2811c-kube-api-access-4bg9m\") pod \"console-operator-58897d9998-gf74n\" (UID: \"be39b067-ab63-48ca-9930-631d73d2811c\") " pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.267465 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.274657 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/107ea484-2b37-42f5-a7d8-f844fa231948-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsjh\" (UID: \"107ea484-2b37-42f5-a7d8-f844fa231948\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.280148 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.294499 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.301109 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.301505 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mskvg\" (UniqueName: \"kubernetes.io/projected/c3fd82ff-34b6-4e6c-97aa-0349b6cbf219-kube-api-access-mskvg\") pod \"apiserver-7bbb656c7d-bvrqv\" (UID: \"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.312236 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbvl\" (UniqueName: \"kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl\") pod \"console-f9d7485db-lbjm4\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.333959 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.336050 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl4xx\" (UniqueName: \"kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx\") pod \"controller-manager-879f6c89f-jx726\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.341234 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb"] Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.361539 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9bw\" (UniqueName: \"kubernetes.io/projected/db9f8ea0-be69-4991-801b-4dea935a10b0-kube-api-access-qp9bw\") pod \"dns-operator-744455d44c-vvvr9\" (UID: \"db9f8ea0-be69-4991-801b-4dea935a10b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:11 crc kubenswrapper[5031]: W0129 08:41:11.366925 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4731ec6c_5138_422c_8591_fc405d201db7.slice/crio-28be63e42158eaa4fbe36a3123d4e5a8020493c147055e7c52241f8aa8d43643 WatchSource:0}: Error finding container 28be63e42158eaa4fbe36a3123d4e5a8020493c147055e7c52241f8aa8d43643: Status 404 returned error can't find the container with id 28be63e42158eaa4fbe36a3123d4e5a8020493c147055e7c52241f8aa8d43643 Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.381695 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmt8\" (UniqueName: \"kubernetes.io/projected/3acd54ae-3c41-48d1-bb86-1ab7c36ab86f-kube-api-access-6vmt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4h5k\" (UID: \"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.400838 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6kj6\" (UniqueName: \"kubernetes.io/projected/0d371617-7dd8-407f-b233-73ec3cd483e2-kube-api-access-q6kj6\") pod \"apiserver-76f77b778f-fmrqw\" (UID: \"0d371617-7dd8-407f-b233-73ec3cd483e2\") " pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.413874 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.413917 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.414198 5031 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:11.914187182 +0000 UTC m=+152.413775134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.468016 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.506002 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.510447 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"] Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.526878 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.527083 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.027041188 +0000 UTC m=+152.526629150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527324 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527433 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-config\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527467 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-serving-cert\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527488 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c08f8db-9c08-4b74-957a-52b0787df6c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527507 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krbwn\" (UniqueName: \"kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527576 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-registration-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527603 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
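
Both the MountVolume.MountDevice failure above and this UnmountVolume.TearDown failure have the same cause: the volume is backed by the kubevirt.io.hostpath-provisioner CSI driver, and the kubelet only learns about a CSI driver once the driver's node plugin registers over the plugin-registration socket. The csi-hostpathplugin-tsvjs pod that provides the driver is itself only now being set up (note its registration-dir and plugins-dir host-path volumes in the surrounding lines). Until registration completes, every operation on the volume fails fast and is requeued after a delay, starting at the 500ms durationBeforeRetry seen here. A compressed, illustrative Go sketch of that retry shape (the doubling backoff is an assumption modeled on the kubelet's nested pending operations, not read from this log):

    package main

    import (
            "errors"
            "fmt"
            "time"
    )

    var registered = false // flips when the plugin's registration completes

    // mountDevice stands in for the CSI call that fails while unregistered.
    func mountDevice() error {
            if !registered {
                    return errors.New("driver kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
            }
            return nil
    }

    func main() {
            backoff := 500 * time.Millisecond // matches durationBeforeRetry above
            for attempt := 0; attempt < 5; attempt++ {
                    if err := mountDevice(); err != nil {
                            fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
                            time.Sleep(backoff / 100) // compressed so the example runs quickly
                            backoff *= 2              // assumed exponential growth
                            if attempt == 2 {
                                    registered = true // simulate the plugin finishing registration
                            }
                            continue
                    }
                    fmt.Printf("attempt %d: MountDevice succeeded\n", attempt)
                    return
            }
    }
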
29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527645 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-trusted-ca\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527669 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vwbp\" (UniqueName: \"kubernetes.io/projected/e577602e-26da-4f65-8997-38b52ae67d82-kube-api-access-4vwbp\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527715 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527738 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-plugins-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.527809 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnd72\" (UniqueName: \"kubernetes.io/projected/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-kube-api-access-fnd72\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528106 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7n4k\" (UniqueName: \"kubernetes.io/projected/78a97e96-7549-489a-9fc4-d71b1e01d8d5-kube-api-access-f7n4k\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528148 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954eb100-eded-479c-8ed9-0af63a167bcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528172 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1907c2-4acb-475f-86ba-3526740ccd3a-proxy-tls\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528207 5031 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmnw\" (UniqueName: \"kubernetes.io/projected/c1854052-ad41-4a6f-8538-2456b0008253-kube-api-access-kcmnw\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528270 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090c6677-d6d6-4904-8c95-58c33fc2cc80-config\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528295 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e17abb9f-88dc-4ed6-949e-c91bc349f478-metrics-tls\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528317 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1907c2-4acb-475f-86ba-3526740ccd3a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528345 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-service-ca-bundle\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528438 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-images\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528464 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljsj\" (UniqueName: \"kubernetes.io/projected/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-kube-api-access-8ljsj\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528527 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" 
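Note: the E0129 nestedpendingoperations.go:348 records in this window share one root cause: kubelet does not yet see kubevirt.io.hostpath-provisioner among its registered CSI drivers, so every MountVolume.MountDevice attempt for the image-registry PVC and every UnmountVolume.TearDown attempt for pod 8f668bae-612b-4b75-9490-919e737c6a3b is parked with durationBeforeRetry 500ms. The csi-hostpathplugin-tsvjs volumes being processed in the same reconciler pass (registration-dir, socket-dir, plugins-dir, csi-data-dir) are the host paths through which that driver will eventually register. Below is a minimal Go sketch of the two mechanics visible in these records; the UniqueName format and the 500ms starting delay are taken from the log, while the doubling retry schedule and its cap mirror kubelet's exponential backoff behavior and should be treated as assumptions here.

```go
// Illustrative sketch only. The "kubernetes.io/csi/<driver>^<volumeHandle>"
// UniqueName format and the 500ms initial delay appear in the log above; the
// doubling-with-cap schedule is assumed from kubelet's exponential backoff
// (cap taken here to be 2m2s, not shown in this log).
package main

import (
	"fmt"
	"strings"
	"time"
)

// splitCSIUniqueName splits a kubelet CSI unique volume name of the form
// "kubernetes.io/csi/<driver>^<volumeHandle>" into its two parts.
func splitCSIUniqueName(unique string) (driver, handle string, ok bool) {
	rest, found := strings.CutPrefix(unique, "kubernetes.io/csi/")
	if !found {
		return "", "", false
	}
	return strings.Cut(rest, "^")
}

func main() {
	driver, handle, ok := splitCSIUniqueName(
		"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
	// Prints: true kubevirt.io.hostpath-provisioner pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
	fmt.Println(ok, driver, handle)

	// Each failed MountDevice/TearDown defers the next attempt, roughly
	// doubling durationBeforeRetry from the 500ms seen in the log.
	delay, maxDelay := 500*time.Millisecond, 2*time.Minute+2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry=%v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```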
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528551 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/090c6677-d6d6-4904-8c95-58c33fc2cc80-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528627 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528656 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldgm\" (UniqueName: \"kubernetes.io/projected/e968f93e-d3da-4072-9c98-ebf25aff6bc2-kube-api-access-wldgm\") pod \"migrator-59844c95c7-wgvnk\" (UID: \"e968f93e-d3da-4072-9c98-ebf25aff6bc2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528844 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528871 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c08f8db-9c08-4b74-957a-52b0787df6c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.528899 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-cabundle\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529016 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954eb100-eded-479c-8ed9-0af63a167bcb-config\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529061 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-bound-sa-token\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: 
I0129 08:41:11.529111 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7c16379-b692-4b0c-b4ea-968c97d75f6b-proxy-tls\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529136 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4babbe7-9316-4110-8e66-193cb7ee0b2c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529167 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529201 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmgb\" (UniqueName: \"kubernetes.io/projected/0e1907c2-4acb-475f-86ba-3526740ccd3a-kube-api-access-dtmgb\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.529227 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l5r\" (UniqueName: \"kubernetes.io/projected/5452b83a-9747-46f4-9353-33496cda70b3-kube-api-access-d5l5r\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.530929 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090c6677-d6d6-4904-8c95-58c33fc2cc80-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531005 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-socket-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531041 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-mountpoint-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") 
" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531068 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531473 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-srv-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531892 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-srv-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.531823 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532203 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmwl\" (UniqueName: \"kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532504 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-node-bootstrap-token\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532590 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-webhook-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532753 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532870 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-stats-auth\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.532914 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9st6\" (UniqueName: \"kubernetes.io/projected/c4babbe7-9316-4110-8e66-193cb7ee0b2c-kube-api-access-b9st6\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.533108 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9qc\" (UniqueName: \"kubernetes.io/projected/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-kube-api-access-jm9qc\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.533295 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-csi-data-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.533523 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.533643 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-default-certificate\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534105 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954eb100-eded-479c-8ed9-0af63a167bcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534167 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a97e96-7549-489a-9fc4-d71b1e01d8d5-cert\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534190 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534225 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxsh\" (UniqueName: \"kubernetes.io/projected/c7c16379-b692-4b0c-b4ea-968c97d75f6b-kube-api-access-7xxsh\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534344 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5bv\" (UniqueName: \"kubernetes.io/projected/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-kube-api-access-dk5bv\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534434 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-key\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534479 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c08f8db-9c08-4b74-957a-52b0787df6c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534538 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4babbe7-9316-4110-8e66-193cb7ee0b2c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.534805 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhdb\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535545 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-metrics-tls\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535596 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-profile-collector-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535618 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e577602e-26da-4f65-8997-38b52ae67d82-tmpfs\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535725 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17abb9f-88dc-4ed6-949e-c91bc349f478-config-volume\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535745 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdzc\" (UniqueName: \"kubernetes.io/projected/e17abb9f-88dc-4ed6-949e-c91bc349f478-kube-api-access-6gdzc\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535808 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-certs\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535854 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf8b51f5-e358-44b0-874b-454aa6479a9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: \"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535878 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmtz\" (UniqueName: \"kubernetes.io/projected/bf8b51f5-e358-44b0-874b-454aa6479a9e-kube-api-access-mzmtz\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: \"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535902 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmkf\" (UniqueName: \"kubernetes.io/projected/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-kube-api-access-cpmkf\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.535966 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxxm\" (UniqueName: 
\"kubernetes.io/projected/f655fd22-4714-449e-aca8-6365f02bb397-kube-api-access-qhxxm\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536006 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536061 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536085 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-metrics-certs\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536132 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxq4l\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-kube-api-access-gxq4l\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536477 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536505 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rdkc\" (UniqueName: \"kubernetes.io/projected/fb9eb323-2fa1-4562-a71f-ccb3f771395b-kube-api-access-9rdkc\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536556 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536662 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.536671 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.538275 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.038258682 +0000 UTC m=+152.537846634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.561874 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.589383 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.611985 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.625908 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637310 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.637486 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.137461398 +0000 UTC m=+152.637049340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637531 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-cabundle\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637557 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954eb100-eded-479c-8ed9-0af63a167bcb-config\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637574 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-bound-sa-token\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637591 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7c16379-b692-4b0c-b4ea-968c97d75f6b-proxy-tls\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637608 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4babbe7-9316-4110-8e66-193cb7ee0b2c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637627 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5l5r\" (UniqueName: \"kubernetes.io/projected/5452b83a-9747-46f4-9353-33496cda70b3-kube-api-access-d5l5r\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637646 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 
08:41:11.637667 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtmgb\" (UniqueName: \"kubernetes.io/projected/0e1907c2-4acb-475f-86ba-3526740ccd3a-kube-api-access-dtmgb\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637694 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090c6677-d6d6-4904-8c95-58c33fc2cc80-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637711 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-socket-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637726 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-mountpoint-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637743 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637758 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-srv-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637771 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-srv-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637786 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frmwl\" (UniqueName: \"kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637806 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-node-bootstrap-token\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637821 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-webhook-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637838 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637853 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-stats-auth\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637870 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9st6\" (UniqueName: \"kubernetes.io/projected/c4babbe7-9316-4110-8e66-193cb7ee0b2c-kube-api-access-b9st6\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637888 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9qc\" (UniqueName: \"kubernetes.io/projected/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-kube-api-access-jm9qc\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637905 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-csi-data-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637922 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637941 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-default-certificate\") pod \"router-default-5444994796-4v677\" (UID: 
\"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637971 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.637989 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954eb100-eded-479c-8ed9-0af63a167bcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638003 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a97e96-7549-489a-9fc4-d71b1e01d8d5-cert\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638020 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xxsh\" (UniqueName: \"kubernetes.io/projected/c7c16379-b692-4b0c-b4ea-968c97d75f6b-kube-api-access-7xxsh\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638037 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5bv\" (UniqueName: \"kubernetes.io/projected/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-kube-api-access-dk5bv\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638052 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-key\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638071 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c08f8db-9c08-4b74-957a-52b0787df6c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638091 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4babbe7-9316-4110-8e66-193cb7ee0b2c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: 
I0129 08:41:11.638111 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-metrics-tls\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638130 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhdb\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638150 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-profile-collector-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638169 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e577602e-26da-4f65-8997-38b52ae67d82-tmpfs\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638191 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17abb9f-88dc-4ed6-949e-c91bc349f478-config-volume\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638206 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdzc\" (UniqueName: \"kubernetes.io/projected/e17abb9f-88dc-4ed6-949e-c91bc349f478-kube-api-access-6gdzc\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638222 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-certs\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638237 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf8b51f5-e358-44b0-874b-454aa6479a9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: \"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638256 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmtz\" (UniqueName: \"kubernetes.io/projected/bf8b51f5-e358-44b0-874b-454aa6479a9e-kube-api-access-mzmtz\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: 
\"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638277 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmkf\" (UniqueName: \"kubernetes.io/projected/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-kube-api-access-cpmkf\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638315 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxxm\" (UniqueName: \"kubernetes.io/projected/f655fd22-4714-449e-aca8-6365f02bb397-kube-api-access-qhxxm\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638334 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638354 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638381 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-metrics-certs\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638398 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxq4l\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-kube-api-access-gxq4l\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638415 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638435 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rdkc\" (UniqueName: \"kubernetes.io/projected/fb9eb323-2fa1-4562-a71f-ccb3f771395b-kube-api-access-9rdkc\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: 
I0129 08:41:11.638462 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638482 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638501 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638527 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-config\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638543 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-serving-cert\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638556 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c08f8db-9c08-4b74-957a-52b0787df6c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638573 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krbwn\" (UniqueName: \"kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638589 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-registration-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638603 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638622 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-trusted-ca\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638636 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vwbp\" (UniqueName: \"kubernetes.io/projected/e577602e-26da-4f65-8997-38b52ae67d82-kube-api-access-4vwbp\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638650 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638664 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-plugins-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638682 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnd72\" (UniqueName: \"kubernetes.io/projected/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-kube-api-access-fnd72\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638704 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7n4k\" (UniqueName: \"kubernetes.io/projected/78a97e96-7549-489a-9fc4-d71b1e01d8d5-kube-api-access-f7n4k\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638719 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954eb100-eded-479c-8ed9-0af63a167bcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638735 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1907c2-4acb-475f-86ba-3526740ccd3a-proxy-tls\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638751 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmnw\" (UniqueName: \"kubernetes.io/projected/c1854052-ad41-4a6f-8538-2456b0008253-kube-api-access-kcmnw\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090c6677-d6d6-4904-8c95-58c33fc2cc80-config\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638783 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-service-ca-bundle\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638805 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e17abb9f-88dc-4ed6-949e-c91bc349f478-metrics-tls\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638821 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1907c2-4acb-475f-86ba-3526740ccd3a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638838 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-images\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638855 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljsj\" (UniqueName: \"kubernetes.io/projected/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-kube-api-access-8ljsj\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638870 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/090c6677-d6d6-4904-8c95-58c33fc2cc80-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 
08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638882 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-mountpoint-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638887 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638926 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638946 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldgm\" (UniqueName: \"kubernetes.io/projected/e968f93e-d3da-4072-9c98-ebf25aff6bc2-kube-api-access-wldgm\") pod \"migrator-59844c95c7-wgvnk\" (UID: \"e968f93e-d3da-4072-9c98-ebf25aff6bc2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638971 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c08f8db-9c08-4b74-957a-52b0787df6c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.639573 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c08f8db-9c08-4b74-957a-52b0787df6c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.658112 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.659664 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-cabundle\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.660618 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.660837 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954eb100-eded-479c-8ed9-0af63a167bcb-config\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.661233 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-csi-data-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.661299 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4babbe7-9316-4110-8e66-193cb7ee0b2c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.661457 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.661780 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e577602e-26da-4f65-8997-38b52ae67d82-tmpfs\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.661895 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.161875821 +0000 UTC m=+152.661463793 (durationBeforeRetry 500ms). 
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.662669 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-registration-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.663927 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17abb9f-88dc-4ed6-949e-c91bc349f478-config-volume\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.664322 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.664496 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-srv-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.664970 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-default-certificate\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.665047 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-trusted-ca\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.665185 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-plugins-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.665599 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.665663 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-profile-collector-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.666293 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.666553 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.666876 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7c16379-b692-4b0c-b4ea-968c97d75f6b-images\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.638853 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb9eb323-2fa1-4562-a71f-ccb3f771395b-socket-dir\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.667594 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-config\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.668138 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.668718 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-node-bootstrap-token\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.669717 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c1854052-ad41-4a6f-8538-2456b0008253-srv-cert\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.670706 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf8b51f5-e358-44b0-874b-454aa6479a9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: \"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.670958 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4babbe7-9316-4110-8e66-193cb7ee0b2c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.671146 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f655fd22-4714-449e-aca8-6365f02bb397-certs\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.671677 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.671693 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-service-ca-bundle\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.671920 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-stats-auth\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.676567 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7c16379-b692-4b0c-b4ea-968c97d75f6b-proxy-tls\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.677435 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-metrics-tls\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.677991 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a97e96-7549-489a-9fc4-d71b1e01d8d5-cert\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.678639 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e577602e-26da-4f65-8997-38b52ae67d82-webhook-cert\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.684038 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.684582 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-metrics-certs\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.687081 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9m279"]
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.688006 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090c6677-d6d6-4904-8c95-58c33fc2cc80-config\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.688025 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090c6677-d6d6-4904-8c95-58c33fc2cc80-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.688975 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-serving-cert\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.689608 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.689868 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.691720 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1907c2-4acb-475f-86ba-3526740ccd3a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.692497 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-signing-key\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.700044 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5452b83a-9747-46f4-9353-33496cda70b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.700226 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954eb100-eded-479c-8ed9-0af63a167bcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.700272 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e17abb9f-88dc-4ed6-949e-c91bc349f478-metrics-tls\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.702117 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1907c2-4acb-475f-86ba-3526740ccd3a-proxy-tls\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.702179 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c08f8db-9c08-4b74-957a-52b0787df6c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.702896 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhdb\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.707785 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5bv\" (UniqueName: \"kubernetes.io/projected/c6355c4a-baf8-43cb-bbca-cf6f0e422b9f-kube-api-access-dk5bv\") pod \"service-ca-operator-777779d784-4g95m\" (UID: \"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.716326 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frmwl\" (UniqueName: \"kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl\") pod \"marketplace-operator-79b997595-r78xm\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.726643 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w2sql"]
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.733889 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xxsh\" (UniqueName: \"kubernetes.io/projected/c7c16379-b692-4b0c-b4ea-968c97d75f6b-kube-api-access-7xxsh\") pod \"machine-config-operator-74547568cd-p598l\" (UID: \"c7c16379-b692-4b0c-b4ea-968c97d75f6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"
Jan 29 08:41:11 crc kubenswrapper[5031]: W0129 08:41:11.735554 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a3bbd5e_4071_4761_b455_e830e12dfa81.slice/crio-19a21b457cec013f3cb957d1496bc33853aa039b5f51204c72b384a13071e683 WatchSource:0}: Error finding container 19a21b457cec013f3cb957d1496bc33853aa039b5f51204c72b384a13071e683: Status 404 returned error can't find the container with id 19a21b457cec013f3cb957d1496bc33853aa039b5f51204c72b384a13071e683
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.739913 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.740731 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.240690866 +0000 UTC m=+152.740278818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.752737 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gxmb"]
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.764685 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.768289 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtmgb\" (UniqueName: \"kubernetes.io/projected/0e1907c2-4acb-475f-86ba-3526740ccd3a-kube-api-access-dtmgb\") pod \"machine-config-controller-84d6567774-2fq9s\" (UID: \"0e1907c2-4acb-475f-86ba-3526740ccd3a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.776820 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5l5r\" (UniqueName: \"kubernetes.io/projected/5452b83a-9747-46f4-9353-33496cda70b3-kube-api-access-d5l5r\") pod \"olm-operator-6b444d44fb-rr7qs\" (UID: \"5452b83a-9747-46f4-9353-33496cda70b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.789863 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.807306 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.810931 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-bound-sa-token\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.815197 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sp9n7"]
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.836799 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldgm\" (UniqueName: \"kubernetes.io/projected/e968f93e-d3da-4072-9c98-ebf25aff6bc2-kube-api-access-wldgm\") pod \"migrator-59844c95c7-wgvnk\" (UID: \"e968f93e-d3da-4072-9c98-ebf25aff6bc2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.843200 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.843587 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.343575065 +0000 UTC m=+152.843163017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.854880 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9qc\" (UniqueName: \"kubernetes.io/projected/cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a-kube-api-access-jm9qc\") pod \"package-server-manager-789f6589d5-dh6cs\" (UID: \"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.860892 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf74n"]
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.877870 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c08f8db-9c08-4b74-957a-52b0787df6c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckp4c\" (UID: \"2c08f8db-9c08-4b74-957a-52b0787df6c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.915763 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmtz\" (UniqueName: \"kubernetes.io/projected/bf8b51f5-e358-44b0-874b-454aa6479a9e-kube-api-access-mzmtz\") pod \"multus-admission-controller-857f4d67dd-z4ldg\" (UID: \"bf8b51f5-e358-44b0-874b-454aa6479a9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.918124 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmkf\" (UniqueName: \"kubernetes.io/projected/8f1b85d0-d1d7-435f-aee3-2953e7a8ad83-kube-api-access-cpmkf\") pod \"router-default-5444994796-4v677\" (UID: \"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83\") " pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.934744 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxxm\" (UniqueName: \"kubernetes.io/projected/f655fd22-4714-449e-aca8-6365f02bb397-kube-api-access-qhxxm\") pod \"machine-config-server-xnzz7\" (UID: \"f655fd22-4714-449e-aca8-6365f02bb397\") " pod="openshift-machine-config-operator/machine-config-server-xnzz7"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.943788 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.944037 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.444018814 +0000 UTC m=+152.943606776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.943964 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.944429 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:11 crc kubenswrapper[5031]: E0129 08:41:11.944892 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.444882099 +0000 UTC m=+152.944470051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.952256 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krbwn\" (UniqueName: \"kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn\") pod \"collect-profiles-29494590-g4jbz\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.966484 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.981156 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.982099 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9st6\" (UniqueName: \"kubernetes.io/projected/c4babbe7-9316-4110-8e66-193cb7ee0b2c-kube-api-access-b9st6\") pod \"kube-storage-version-migrator-operator-b67b599dd-g2xzt\" (UID: \"c4babbe7-9316-4110-8e66-193cb7ee0b2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.989656 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.997300 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdzc\" (UniqueName: \"kubernetes.io/projected/e17abb9f-88dc-4ed6-949e-c91bc349f478-kube-api-access-6gdzc\") pod \"dns-default-qwhkt\" (UID: \"e17abb9f-88dc-4ed6-949e-c91bc349f478\") " pod="openshift-dns/dns-default-qwhkt"
Jan 29 08:41:11 crc kubenswrapper[5031]: I0129 08:41:11.997808 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.026075 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmnw\" (UniqueName: \"kubernetes.io/projected/c1854052-ad41-4a6f-8538-2456b0008253-kube-api-access-kcmnw\") pod \"catalog-operator-68c6474976-frcbb\" (UID: \"c1854052-ad41-4a6f-8538-2456b0008253\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.044815 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"]
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.045442 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.046518 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.545969396 +0000 UTC m=+153.045557348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.053629 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vwbp\" (UniqueName: \"kubernetes.io/projected/e577602e-26da-4f65-8997-38b52ae67d82-kube-api-access-4vwbp\") pod \"packageserver-d55dfcdfc-tvddp\" (UID: \"e577602e-26da-4f65-8997-38b52ae67d82\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.063114 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.069215 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.070741 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.083300 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.089440 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnd72\" (UniqueName: \"kubernetes.io/projected/8f8e6a30-d6ac-4ab0-b342-b57665d86fe5-kube-api-access-fnd72\") pod \"service-ca-9c57cc56f-zbnth\" (UID: \"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zbnth"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.102057 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xnzz7"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.108728 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k"]
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.110013 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh"]
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.111098 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7n4k\" (UniqueName: \"kubernetes.io/projected/78a97e96-7549-489a-9fc4-d71b1e01d8d5-kube-api-access-f7n4k\") pod \"ingress-canary-54g8c\" (UID: \"78a97e96-7549-489a-9fc4-d71b1e01d8d5\") " pod="openshift-ingress-canary/ingress-canary-54g8c"
Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.114879 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.126723 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.131345 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954eb100-eded-479c-8ed9-0af63a167bcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7qm7\" (UID: \"954eb100-eded-479c-8ed9-0af63a167bcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.140711 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxq4l\" (UniqueName: \"kubernetes.io/projected/c03dbc3e-7d90-446e-b328-0c7ce1fb9177-kube-api-access-gxq4l\") pod \"ingress-operator-5b745b69d9-67mcw\" (UID: \"c03dbc3e-7d90-446e-b328-0c7ce1fb9177\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.141087 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qwhkt" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.148074 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.148421 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9"] Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.149986 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.649970046 +0000 UTC m=+153.149557998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.154192 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.164078 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rdkc\" (UniqueName: \"kubernetes.io/projected/fb9eb323-2fa1-4562-a71f-ccb3f771395b-kube-api-access-9rdkc\") pod \"csi-hostpathplugin-tsvjs\" (UID: \"fb9eb323-2fa1-4562-a71f-ccb3f771395b\") " pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.171650 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" event={"ID":"d9509194-11a3-49da-89be-f1c25b0b4268","Type":"ContainerStarted","Data":"7965477b20409e75991960cc8e3b6c35ea68161457c902b41a3315376e155a5d"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.175460 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.185117 5031 generic.go:334] "Generic (PLEG): container finished" podID="4731ec6c-5138-422c-8591-fc405d201db7" containerID="275689da92d44df6dfd54833ecfd26d2ba5b20e313159f6f00b8ef151e558050" exitCode=0 Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.185211 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" event={"ID":"4731ec6c-5138-422c-8591-fc405d201db7","Type":"ContainerDied","Data":"275689da92d44df6dfd54833ecfd26d2ba5b20e313159f6f00b8ef151e558050"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.185244 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" event={"ID":"4731ec6c-5138-422c-8591-fc405d201db7","Type":"ContainerStarted","Data":"28be63e42158eaa4fbe36a3123d4e5a8020493c147055e7c52241f8aa8d43643"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.191730 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-54g8c" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.197491 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fmrqw"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.199204 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.213393 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/090c6677-d6d6-4904-8c95-58c33fc2cc80-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5d45t\" (UID: \"090c6677-d6d6-4904-8c95-58c33fc2cc80\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.216125 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljsj\" (UniqueName: \"kubernetes.io/projected/66c6d48a-bdee-4f5b-b0ca-da05372e1ba2-kube-api-access-8ljsj\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn9ds\" (UID: \"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.219406 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" event={"ID":"854630f2-aa2a-4626-a201-a65f7ea05a9a","Type":"ContainerStarted","Data":"bf4b96c7bb39f43413414ceb53b6412ab74326154e0aca108ec3b71a95507650"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.219463 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" event={"ID":"854630f2-aa2a-4626-a201-a65f7ea05a9a","Type":"ContainerStarted","Data":"baa868c0991a1963ceb63e9ec2b23121dbc3f2398d9085c67136129b1e9a67b1"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.219474 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" event={"ID":"854630f2-aa2a-4626-a201-a65f7ea05a9a","Type":"ContainerStarted","Data":"8d8fa565853b79c2ea35dc33a68ca7dfacc300cb7d88f8691cc122a86958a40c"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.247276 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" event={"ID":"8a3bbd5e-4071-4761-b455-e830e12dfa81","Type":"ContainerStarted","Data":"b3f8a76898fe0604e5633721a77146f310a1330c0ffd22ae843b9dd85c3c0b19"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.247468 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" event={"ID":"8a3bbd5e-4071-4761-b455-e830e12dfa81","Type":"ContainerStarted","Data":"19a21b457cec013f3cb957d1496bc33853aa039b5f51204c72b384a13071e683"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.249952 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.250505 5031 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.750464638 +0000 UTC m=+153.250052590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.250897 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.251951 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.751907178 +0000 UTC m=+153.251495130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.253295 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.260515 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.272860 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.274216 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gf74n" event={"ID":"be39b067-ab63-48ca-9930-631d73d2811c","Type":"ContainerStarted","Data":"c0271f9328f52f60efe6ae94eb33067c51b47e31a607549d8d2e0dba5582b41e"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.275149 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gf74n" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.294052 5031 patch_prober.go:28] interesting pod/console-operator-58897d9998-gf74n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.294110 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gf74n" podUID="be39b067-ab63-48ca-9930-631d73d2811c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.300614 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.300666 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.304222 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.308468 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.308505 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sp9n7" event={"ID":"5f4e6cea-65e3-446f-9925-d63d00fc235f","Type":"ContainerStarted","Data":"52e7744a971ee6b75ad4411046904af9aac4e439b7692e733799497f912cd99c"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.308521 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sp9n7" event={"ID":"5f4e6cea-65e3-446f-9925-d63d00fc235f","Type":"ContainerStarted","Data":"b440affdada9ba234524cf77e3d7fc368c9d1997df870acee10fd06a61058bdb"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.308533 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4v677" event={"ID":"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83","Type":"ContainerStarted","Data":"90a416e36b15b0deb5d2018ca7d2c3c100270b2d83d3505ad879425cc2fc24a2"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.323734 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" event={"ID":"9e7bbdcb-3270-42af-bda0-e6bebab732a2","Type":"ContainerStarted","Data":"2f7c016f3f9f8148db2fd797e19cae1f39380507e34b1b0f60bf875ed078c620"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.323783 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" event={"ID":"9e7bbdcb-3270-42af-bda0-e6bebab732a2","Type":"ContainerStarted","Data":"c4c0f738913e593f7cdd2224755ba0689bc706efb7b6caa0ca0560e948f79c1c"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.325258 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.330127 5031 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rjzm6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.330184 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.338042 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" event={"ID":"8ad39856-53d1-4f86-9ebe-9477b4cd4106","Type":"ContainerStarted","Data":"6e1d89d6090c79c875af89e70b854ecdb0c5d3d3edd69a5ddfa505bde95ae9ee"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.343842 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" event={"ID":"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961","Type":"ContainerStarted","Data":"fd14046f959fbcab8f0998f07d786e3eade36d37cbe81f6566e761b29d5691a5"} Jan 
29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.343874 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" event={"ID":"5a13e6f9-1b9c-4928-ab5e-aa9b30ff4961","Type":"ContainerStarted","Data":"e797321a64fc76c98fcac9560b13486dd8f13938733c7d45982db8cd50b42be3"} Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.352120 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.354046 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.854027365 +0000 UTC m=+153.353615337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.376056 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.395114 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.429665 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.455001 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.455885 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:12.955874364 +0000 UTC m=+153.455462316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.476995 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.485777 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vvvr9"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.550615 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" podStartSLOduration=127.550600534 podStartE2EDuration="2m7.550600534s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:12.548515786 +0000 UTC m=+153.048103738" watchObservedRunningTime="2026-01-29 08:41:12.550600534 +0000 UTC m=+153.050188486" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.556323 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.556694 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.056681295 +0000 UTC m=+153.556269237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.611585 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9m279" podStartSLOduration=127.61155603 podStartE2EDuration="2m7.61155603s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:12.576739605 +0000 UTC m=+153.076327567" watchObservedRunningTime="2026-01-29 08:41:12.61155603 +0000 UTC m=+153.111143982" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.657199 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.657478 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.157467324 +0000 UTC m=+153.657055266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.734591 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kvh98" podStartSLOduration=126.734566251 podStartE2EDuration="2m6.734566251s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:12.733127061 +0000 UTC m=+153.232715013" watchObservedRunningTime="2026-01-29 08:41:12.734566251 +0000 UTC m=+153.234154223" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.760759 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.770944 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.270904828 +0000 UTC m=+153.770492950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.771256 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4g95m"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.775574 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.814856 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.824841 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.852021 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.874249 5031 csr.go:261] certificate signing request csr-h5mws is approved, waiting to be issued Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.880137 5031 csr.go:257] certificate signing request csr-h5mws is issued Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.881941 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.883303 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.383285461 +0000 UTC m=+153.882873413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.902213 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"] Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.980794 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" podStartSLOduration=126.980769739 podStartE2EDuration="2m6.980769739s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:12.975262965 +0000 UTC m=+153.474850917" watchObservedRunningTime="2026-01-29 08:41:12.980769739 +0000 UTC m=+153.480357691" Jan 29 08:41:12 crc kubenswrapper[5031]: I0129 08:41:12.984465 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:12 crc kubenswrapper[5031]: E0129 08:41:12.984866 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.484847313 +0000 UTC m=+153.984435265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.035207 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z4ldg"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.057395 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mp2n" podStartSLOduration=128.057356671 podStartE2EDuration="2m8.057356671s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:13.049289846 +0000 UTC m=+153.548877798" watchObservedRunningTime="2026-01-29 08:41:13.057356671 +0000 UTC m=+153.556944623" Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.064905 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.087272 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.087615 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.587598797 +0000 UTC m=+154.087186749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.168336 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gf74n" podStartSLOduration=127.168314766 podStartE2EDuration="2m7.168314766s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:13.147321609 +0000 UTC m=+153.646909581" watchObservedRunningTime="2026-01-29 08:41:13.168314766 +0000 UTC m=+153.667902718" Jan 29 08:41:13 crc kubenswrapper[5031]: W0129 08:41:13.185073 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode968f93e_d3da_4072_9c98_ebf25aff6bc2.slice/crio-71ed36ca0ac15ff11d69afca1a1753273d3b998d67c3c207d9a183189164d753 WatchSource:0}: Error finding container 71ed36ca0ac15ff11d69afca1a1753273d3b998d67c3c207d9a183189164d753: Status 404 returned error can't find the container with id 71ed36ca0ac15ff11d69afca1a1753273d3b998d67c3c207d9a183189164d753 Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.194125 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.194539 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.694522578 +0000 UTC m=+154.194110530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.261256 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sp9n7" podStartSLOduration=127.261235725 podStartE2EDuration="2m7.261235725s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:13.254503307 +0000 UTC m=+153.754091279" watchObservedRunningTime="2026-01-29 08:41:13.261235725 +0000 UTC m=+153.760823677" Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.270973 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p598l"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.297690 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.297984 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.797972823 +0000 UTC m=+154.297560775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.307497 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qwhkt"] Jan 29 08:41:13 crc kubenswrapper[5031]: W0129 08:41:13.307929 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7c16379_b692_4b0c_b4ea_968c97d75f6b.slice/crio-6852248c74f6086fd67878029fc593b060d0950662e675cccc0af957d9c79383 WatchSource:0}: Error finding container 6852248c74f6086fd67878029fc593b060d0950662e675cccc0af957d9c79383: Status 404 returned error can't find the container with id 6852248c74f6086fd67878029fc593b060d0950662e675cccc0af957d9c79383 Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.350425 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" event={"ID":"dd3a139e-483b-41e7-ac87-3d3a0f86a059","Type":"ContainerStarted","Data":"84531bf4e0cbc37793a052ef9f62af1b406dd57201463dee919951d4bfe86400"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.352194 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbjm4" event={"ID":"f07acf69-4876-413e-b098-b7074c7018c2","Type":"ContainerStarted","Data":"f3d9efe03cd5860068bc480a71c4b6263e4ec93317ae88038dcb8852f78910c5"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.352968 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" event={"ID":"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f","Type":"ContainerStarted","Data":"f618a791137b3db960565d852de96d08fadc43bae125d6bea0871c466ed083d5"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.356280 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" event={"ID":"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f","Type":"ContainerStarted","Data":"ce19f545fc30ebe3749c04391a9f991ae08d255a0db313bbfc591cdaea91bd19"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.362280 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" event={"ID":"bf8b51f5-e358-44b0-874b-454aa6479a9e","Type":"ContainerStarted","Data":"89196416ea605f59e42486cd1e20eb8ecf117dd8ffa21d2c73c3dc0a2d66d122"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.377974 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" event={"ID":"5452b83a-9747-46f4-9353-33496cda70b3","Type":"ContainerStarted","Data":"f84e651a411457e08da64ece1d73f25e29c1a37b3a701c1ad2e82f3dc2cc16ee"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.385754 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qwhkt" event={"ID":"e17abb9f-88dc-4ed6-949e-c91bc349f478","Type":"ContainerStarted","Data":"e85a864e2bed1e9b9fc6f6fb735bfed06d3264df929f6347a0df6ea096ac17bc"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 
08:41:13.400002 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.400147 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.90010032 +0000 UTC m=+154.399706942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.400272 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.400544 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:13.900533593 +0000 UTC m=+154.400121545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.405084 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" event={"ID":"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219","Type":"ContainerStarted","Data":"b0dcced9c0dc5627ac10a8dc246264616c0ac05148c7f918cf2cb9c0df6240ed"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.410556 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" event={"ID":"e968f93e-d3da-4072-9c98-ebf25aff6bc2","Type":"ContainerStarted","Data":"71ed36ca0ac15ff11d69afca1a1753273d3b998d67c3c207d9a183189164d753"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.444904 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" event={"ID":"c7c16379-b692-4b0c-b4ea-968c97d75f6b","Type":"ContainerStarted","Data":"6852248c74f6086fd67878029fc593b060d0950662e675cccc0af957d9c79383"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.470746 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gf74n" event={"ID":"be39b067-ab63-48ca-9930-631d73d2811c","Type":"ContainerStarted","Data":"7e68b74b8aa0de20b21586b71f9063888286552fb58382f7f2cc687a7d1a766b"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.471970 5031 patch_prober.go:28] interesting pod/console-operator-58897d9998-gf74n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.472096 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gf74n" podUID="be39b067-ab63-48ca-9930-631d73d2811c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.476945 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xnzz7" event={"ID":"f655fd22-4714-449e-aca8-6365f02bb397","Type":"ContainerStarted","Data":"eb6abf9741cbf90de660c41c580ab7b0cbe2d3640a1209bc2e3373376f5543da"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.487708 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" event={"ID":"db9f8ea0-be69-4991-801b-4dea935a10b0","Type":"ContainerStarted","Data":"866ef4e9f84795efe3903b43c9d49f78849e6635d78cc4041289c34b1336bedf"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.502156 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.508446 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.008412871 +0000 UTC m=+154.508000823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.527420 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.568502 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tsvjs"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.570114 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.593843 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.593903 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-54g8c"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.601764 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" event={"ID":"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b","Type":"ContainerStarted","Data":"d4de11f8d133064eeec7d1cba93b4bc97d26a2c76953e3b4e86c45fefc316d80"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.604721 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.605732 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.105715192 +0000 UTC m=+154.605303144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.625611 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" event={"ID":"107ea484-2b37-42f5-a7d8-f844fa231948","Type":"ContainerStarted","Data":"cbfd59504ebf603cad5397fdc4f9e32434a4d80b0ef3675e3a21aaefa07597be"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.629713 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" event={"ID":"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a","Type":"ContainerStarted","Data":"426986eb73c7f91a277eed3b655e813fcbc904d2b27a0f7ecf8f4afa661348ca"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.640650 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" event={"ID":"0e1907c2-4acb-475f-86ba-3526740ccd3a","Type":"ContainerStarted","Data":"460703e2287c65fd113f231b88fbc7f64f97e153530be6ec79a6d66018825242"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.644373 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" event={"ID":"8ad39856-53d1-4f86-9ebe-9477b4cd4106","Type":"ContainerStarted","Data":"bc70e1f202ae23ad5fc04776ee18e0b443ed23038a42ed47b6c48736d06725d8"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.672277 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.672333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" event={"ID":"8a3bbd5e-4071-4761-b455-e830e12dfa81","Type":"ContainerStarted","Data":"3c9983c749e3d6845f3773e1f369fe64c78b39fb33d9bbb48214a1549ea419bf"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.677602 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" event={"ID":"2c08f8db-9c08-4b74-957a-52b0787df6c6","Type":"ContainerStarted","Data":"d15ed2edf9a49782c1bb52e0bcec4a890ce9faa0a30fafb787f7b650d7ace00d"} Jan 29 08:41:13 crc kubenswrapper[5031]: W0129 08:41:13.691523 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a97e96_7549_489a_9fc4_d71b1e01d8d5.slice/crio-7c907e8a9539bf615f9349b77bde5ac532d70e36cf5ba12e0b5b17c5a88e6047 WatchSource:0}: Error finding container 7c907e8a9539bf615f9349b77bde5ac532d70e36cf5ba12e0b5b17c5a88e6047: Status 404 returned error can't find the container with id 7c907e8a9539bf615f9349b77bde5ac532d70e36cf5ba12e0b5b17c5a88e6047 Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.702353 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" 
event={"ID":"0d371617-7dd8-407f-b233-73ec3cd483e2","Type":"ContainerStarted","Data":"ada26aa64bdeb92080e5939a3033cfc50ec439eafbac1ec82c9e3fdc74e91226"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.707045 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.707702 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt"] Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.715531 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.215504104 +0000 UTC m=+154.715092056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.719790 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" event={"ID":"e1220954-121b-495d-b2e2-0bb75ce20ca8","Type":"ContainerStarted","Data":"48b6c8a181af7102a743e029c78a64becca05045b296f843ec952bf7e3afd344"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.744522 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4v677" event={"ID":"8f1b85d0-d1d7-435f-aee3-2953e7a8ad83","Type":"ContainerStarted","Data":"bb1ed71915df9cff72eae87cbfc5e1eb90bd4cd9b484a7c53c93e9d184f1c5c5"} Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.745164 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.745221 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.752927 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.753353 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.757329 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.759807 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"] Jan 29 08:41:13 crc kubenswrapper[5031]: W0129 08:41:13.798794 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode577602e_26da_4f65_8997_38b52ae67d82.slice/crio-0aa934998eb3aab97eb208975563470d074ac45cb68eeb5fe20d3d2525d1a191 WatchSource:0}: Error finding container 0aa934998eb3aab97eb208975563470d074ac45cb68eeb5fe20d3d2525d1a191: Status 404 returned error can't find the container with id 0aa934998eb3aab97eb208975563470d074ac45cb68eeb5fe20d3d2525d1a191 Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.808778 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.811555 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.311537601 +0000 UTC m=+154.811125643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.832541 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zbnth"] Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.889807 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 08:36:12 +0000 UTC, rotation deadline is 2026-10-27 08:06:29.520247827 +0000 UTC Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.890218 5031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6503h25m15.630033895s for next certificate rotation Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.912391 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.912704 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.412660359 +0000 UTC m=+154.912248311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.913319 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:13 crc kubenswrapper[5031]: E0129 08:41:13.913774 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.41375966 +0000 UTC m=+154.913347612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.946922 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4v677"
Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.955412 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 08:41:13 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld
Jan 29 08:41:13 crc kubenswrapper[5031]: [+]process-running ok
Jan 29 08:41:13 crc kubenswrapper[5031]: healthz check failed
Jan 29 08:41:13 crc kubenswrapper[5031]: I0129 08:41:13.955462 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.017358 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.017579 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.517554714 +0000 UTC m=+155.017142666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.017832 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.018297 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.518282535 +0000 UTC m=+155.017870487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.118561 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.118712 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.618693643 +0000 UTC m=+155.118281595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.119151 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.119739 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.619687671 +0000 UTC m=+155.119275633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.134879 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" podStartSLOduration=128.134858786 podStartE2EDuration="2m8.134858786s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.133903259 +0000 UTC m=+154.633491231" watchObservedRunningTime="2026-01-29 08:41:14.134858786 +0000 UTC m=+154.634446728"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.219663 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.219940 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.719905256 +0000 UTC m=+155.219493208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.220281 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.220805 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.72079455 +0000 UTC m=+155.220382512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.280588 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-w2sql" podStartSLOduration=128.280570312 podStartE2EDuration="2m8.280570312s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.219167784 +0000 UTC m=+154.718755736" watchObservedRunningTime="2026-01-29 08:41:14.280570312 +0000 UTC m=+154.780158264"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.281327 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4v677" podStartSLOduration=128.281095107 podStartE2EDuration="2m8.281095107s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.279836312 +0000 UTC m=+154.779424264" watchObservedRunningTime="2026-01-29 08:41:14.281095107 +0000 UTC m=+154.780683059"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.324277 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.324788 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.824766539 +0000 UTC m=+155.324354491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.430442 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.431007 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:14.93098831 +0000 UTC m=+155.430576262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.444488 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7gxmb" podStartSLOduration=128.444469068 podStartE2EDuration="2m8.444469068s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.442693798 +0000 UTC m=+154.942281760" watchObservedRunningTime="2026-01-29 08:41:14.444469068 +0000 UTC m=+154.944057030"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.532418 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.532865 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.0328452 +0000 UTC m=+155.532433172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.633732 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.634631 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.134613487 +0000 UTC m=+155.634201429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.736111 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.736597 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.23657731 +0000 UTC m=+155.736165262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.762383 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-54g8c" event={"ID":"78a97e96-7549-489a-9fc4-d71b1e01d8d5","Type":"ContainerStarted","Data":"d2f9f378db949e113be1d80f028333b94fec68f5736e5f98ed5796de8796e815"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.762437 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-54g8c" event={"ID":"78a97e96-7549-489a-9fc4-d71b1e01d8d5","Type":"ContainerStarted","Data":"7c907e8a9539bf615f9349b77bde5ac532d70e36cf5ba12e0b5b17c5a88e6047"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.766488 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" event={"ID":"2c08f8db-9c08-4b74-957a-52b0787df6c6","Type":"ContainerStarted","Data":"9a746049ef3aa76b5a1f286e97731e7ca4d99376dc14efb373bfc9e18e9b6a72"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.792318 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" event={"ID":"e577602e-26da-4f65-8997-38b52ae67d82","Type":"ContainerStarted","Data":"4bf3a7baf75dbb819f2cae42eb61042b36deca907ad20fd3c8687177e273b060"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.792395 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" event={"ID":"e577602e-26da-4f65-8997-38b52ae67d82","Type":"ContainerStarted","Data":"0aa934998eb3aab97eb208975563470d074ac45cb68eeb5fe20d3d2525d1a191"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.792969 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.795403 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" event={"ID":"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5","Type":"ContainerStarted","Data":"a0df477ffd5123c264d6b5e9b03ed597e4a36e848dfb2f998f86faee61ac056b"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.795450 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" event={"ID":"8f8e6a30-d6ac-4ab0-b342-b57665d86fe5","Type":"ContainerStarted","Data":"193155e637e233e82c5159b61786e815876aa67f1535904f502d26d2c7c7362d"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.796069 5031 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tvddp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.796105 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" podUID="e577602e-26da-4f65-8997-38b52ae67d82" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.804777 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckp4c" podStartSLOduration=128.804757617 podStartE2EDuration="2m8.804757617s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.802222727 +0000 UTC m=+155.301810689" watchObservedRunningTime="2026-01-29 08:41:14.804757617 +0000 UTC m=+155.304345569"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.805892 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" event={"ID":"c7c16379-b692-4b0c-b4ea-968c97d75f6b","Type":"ContainerStarted","Data":"a6af59eba873d9b3e75fcdde8ef00b534d58fb1e3cf5679fce5ac3da3e54acf8"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.805925 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" event={"ID":"c7c16379-b692-4b0c-b4ea-968c97d75f6b","Type":"ContainerStarted","Data":"66a1b3cb90acbabea8e46260cde09071df3772766a4b0daab4a2e58662f6d08f"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.806597 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-54g8c" podStartSLOduration=5.806588399 podStartE2EDuration="5.806588399s" podCreationTimestamp="2026-01-29 08:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.779617294 +0000 UTC m=+155.279205246" watchObservedRunningTime="2026-01-29 08:41:14.806588399 +0000 UTC m=+155.306176351"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.818420 5031 generic.go:334] "Generic (PLEG): container finished" podID="0d371617-7dd8-407f-b233-73ec3cd483e2" containerID="e7105b9b0386ca2811986a803b290e06553f2ae514ab96d1ad8ef258a0d6a7ce" exitCode=0
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.818597 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" event={"ID":"0d371617-7dd8-407f-b233-73ec3cd483e2","Type":"ContainerDied","Data":"e7105b9b0386ca2811986a803b290e06553f2ae514ab96d1ad8ef258a0d6a7ce"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.825734 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" event={"ID":"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b","Type":"ContainerStarted","Data":"38caea6d6d82b7b75f5312c927e6271bb4869424178cfae113a12bcc6f1ffe0b"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.826514 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.837767 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.841016 5031 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jx726 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.841051 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.841085 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.341071813 +0000 UTC m=+155.840659765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.846962 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" event={"ID":"dba2693e-b691-45ea-9447-95fc1da261ed","Type":"ContainerStarted","Data":"f57ae943bbf74e86e3036199b0d8d647cca0d67f3dd5956ce749836cf1bd085c"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.847032 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" event={"ID":"dba2693e-b691-45ea-9447-95fc1da261ed","Type":"ContainerStarted","Data":"34d05975367dac2defc69b38eb9f575392b91519cf0d45c1ec28952056087232"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.852282 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zbnth" podStartSLOduration=128.852261586 podStartE2EDuration="2m8.852261586s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.846739071 +0000 UTC m=+155.346327033" watchObservedRunningTime="2026-01-29 08:41:14.852261586 +0000 UTC m=+155.351849548"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.888005 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" event={"ID":"e1220954-121b-495d-b2e2-0bb75ce20ca8","Type":"ContainerStarted","Data":"b73b3a89267b0433e4fad444d8c43e286cdbb14ec9393555707e68fb4b0cc40c"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.901197 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" podStartSLOduration=128.901175854 podStartE2EDuration="2m8.901175854s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.880173477 +0000 UTC m=+155.379761439" watchObservedRunningTime="2026-01-29 08:41:14.901175854 +0000 UTC m=+155.400763816"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.933962 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" event={"ID":"4731ec6c-5138-422c-8591-fc405d201db7","Type":"ContainerStarted","Data":"2c94cccd2dcd2c1c4bee686fc6faaa8ee9d68ab3a4840edd17f1be26dbe0240c"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.934102 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" podStartSLOduration=128.934076535 podStartE2EDuration="2m8.934076535s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.921113803 +0000 UTC m=+155.420701765" watchObservedRunningTime="2026-01-29 08:41:14.934076535 +0000 UTC m=+155.433664487"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.934688 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.939420 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:14 crc kubenswrapper[5031]: E0129 08:41:14.940910 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.440890656 +0000 UTC m=+155.940478618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.943537 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" event={"ID":"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2","Type":"ContainerStarted","Data":"0223e9e5e41a9cdc0ac54b10d8b10e8606336123b5eac358ef8add4505877350"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.943580 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" event={"ID":"66c6d48a-bdee-4f5b-b0ca-da05372e1ba2","Type":"ContainerStarted","Data":"bb29cd0675c76d6102d8dcd815620f81d1fc75e9578622bdf7f9db1e7157b50c"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.950276 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 08:41:14 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld
Jan 29 08:41:14 crc kubenswrapper[5031]: [+]process-running ok
Jan 29 08:41:14 crc kubenswrapper[5031]: healthz check failed
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.950335 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.959031 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" event={"ID":"c1854052-ad41-4a6f-8538-2456b0008253","Type":"ContainerStarted","Data":"614b0f228a665f982a5df78580a9c7d73ef3bb9c7ed4b7172045e86d8d6b3ceb"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.959096 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" event={"ID":"c1854052-ad41-4a6f-8538-2456b0008253","Type":"ContainerStarted","Data":"11d4444a62404da8e34d4e69923df9049f757369c046c20ec00903c59dbd3950"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.960181 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.962482 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p598l" podStartSLOduration=128.962468559 podStartE2EDuration="2m8.962468559s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:14.960759742 +0000 UTC m=+155.460347694" watchObservedRunningTime="2026-01-29 08:41:14.962468559 +0000 UTC m=+155.462056531"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.964966 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xnzz7" event={"ID":"f655fd22-4714-449e-aca8-6365f02bb397","Type":"ContainerStarted","Data":"bf0e309a9ecc2070c0c634a7d4d487f1cc2377b16e90aa0a189ab0f7a3225e08"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.969346 5031 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frcbb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.969459 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" podUID="c1854052-ad41-4a6f-8538-2456b0008253" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.971550 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" event={"ID":"c6355c4a-baf8-43cb-bbca-cf6f0e422b9f","Type":"ContainerStarted","Data":"edfd7d445eb7098e1d4e192a98de99ad69c0c970697d0913d4178a071de8aba6"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.974051 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" event={"ID":"db9f8ea0-be69-4991-801b-4dea935a10b0","Type":"ContainerStarted","Data":"c64a3f175db171911a986272e1fae084d59d196e9216e7802eb3af049f7692bf"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.982505 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qwhkt" event={"ID":"e17abb9f-88dc-4ed6-949e-c91bc349f478","Type":"ContainerStarted","Data":"4b0eb52457511590a10c19cb6089f913bc0654712d17504b52daaf9b4ab6aa9c"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.983337 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qwhkt"
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.991633 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" event={"ID":"e968f93e-d3da-4072-9c98-ebf25aff6bc2","Type":"ContainerStarted","Data":"5f538c4ba46f0268c81b3a5f9e9a1c6e1904898220e2957f5136b64c65d41d11"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.991678 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" event={"ID":"e968f93e-d3da-4072-9c98-ebf25aff6bc2","Type":"ContainerStarted","Data":"9843f2f22dd9d4a7e02902d1bbd63f08e84fb819aed448942952932ff86c32c3"}
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.996294 5031 generic.go:334] "Generic (PLEG): container finished" podID="c3fd82ff-34b6-4e6c-97aa-0349b6cbf219" containerID="d0e0611972a217dadcb9b986b368d6e807e524690a6bdd23e1a5a999d676b6b0" exitCode=0
Jan 29 08:41:14 crc kubenswrapper[5031]: I0129 08:41:14.996584 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" event={"ID":"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219","Type":"ContainerDied","Data":"d0e0611972a217dadcb9b986b368d6e807e524690a6bdd23e1a5a999d676b6b0"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.024201 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" event={"ID":"3acd54ae-3c41-48d1-bb86-1ab7c36ab86f","Type":"ContainerStarted","Data":"da197da20139975602f6ce361d5f6f5d7c8c046ac79f800de816b73feb7ebb67"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.033234 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" event={"ID":"fb9eb323-2fa1-4562-a71f-ccb3f771395b","Type":"ContainerStarted","Data":"7b9747fe350ea38613318f2961de6864adec7bed7b40dbcf0741c781f168b998"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.042009 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.043456 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" podStartSLOduration=129.043437345 podStartE2EDuration="2m9.043437345s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.039022081 +0000 UTC m=+155.538610033" watchObservedRunningTime="2026-01-29 08:41:15.043437345 +0000 UTC m=+155.543025297"
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.045277 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.545258225 +0000 UTC m=+156.044846367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.053738 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" event={"ID":"c03dbc3e-7d90-446e-b328-0c7ce1fb9177","Type":"ContainerStarted","Data":"5b9eb0e2641db1b838ba22fa6620d590c39e257bb1f572d778766701c316d3dd"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.053795 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" event={"ID":"c03dbc3e-7d90-446e-b328-0c7ce1fb9177","Type":"ContainerStarted","Data":"50980734d69045c4b984c2fe5cd066be08c5446ba625375dd6065ba8d1ea1e23"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.068756 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" podStartSLOduration=129.068724642 podStartE2EDuration="2m9.068724642s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.065292886 +0000 UTC m=+155.564880848" watchObservedRunningTime="2026-01-29 08:41:15.068724642 +0000 UTC m=+155.568312594"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.084210 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" event={"ID":"0e1907c2-4acb-475f-86ba-3526740ccd3a","Type":"ContainerStarted","Data":"5e1318be44d9ad772803960692883559435cfa39839593052802cf87f7137533"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.084264 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" event={"ID":"0e1907c2-4acb-475f-86ba-3526740ccd3a","Type":"ContainerStarted","Data":"6b1dba257ae74ade66c85e21d9ccee6131df2b448c3ccac260911748efb5b773"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.103970 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" event={"ID":"bf8b51f5-e358-44b0-874b-454aa6479a9e","Type":"ContainerStarted","Data":"b168642b612ebd210f465a075bfe264e7d23875558d2ae707e7bbf986de04701"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.133636 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" event={"ID":"dd3a139e-483b-41e7-ac87-3d3a0f86a059","Type":"ContainerStarted","Data":"025168d9d6d0200cf18b7855e8b0d0d7a89a39941108b5db0b73482758ed6059"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.134757 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" podStartSLOduration=129.134733279 podStartE2EDuration="2m9.134733279s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.132458745 +0000 UTC m=+155.632046697" watchObservedRunningTime="2026-01-29 08:41:15.134733279 +0000 UTC m=+155.634321231"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.135103 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.139208 5031 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r78xm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.139258 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.143682 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.147424 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.647398993 +0000 UTC m=+156.146986935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.148032 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" event={"ID":"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a","Type":"ContainerStarted","Data":"eb18afea71df8a410af8830a1d6542bce5d2d6f03b4bc742b53e7c3cb26a5bd6"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.148086 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" event={"ID":"cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a","Type":"ContainerStarted","Data":"4426892abfd55ebd48de6a9758763b2679bb056eca77ca78cdc9140e99c14494"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.148926 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.156235 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" event={"ID":"c4babbe7-9316-4110-8e66-193cb7ee0b2c","Type":"ContainerStarted","Data":"f6998e7b1a19492a65ed6a0f12f8ac11c67675331f6021bf25f7a1b10e0c4dd0"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.156320 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" event={"ID":"c4babbe7-9316-4110-8e66-193cb7ee0b2c","Type":"ContainerStarted","Data":"b8f2a50ea64343f62301e41bcc1e6da86f98de057082b53da09d977a8a1054f0"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.167187 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn9ds" podStartSLOduration=129.167156946 podStartE2EDuration="2m9.167156946s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.157627629 +0000 UTC m=+155.657215581" watchObservedRunningTime="2026-01-29 08:41:15.167156946 +0000 UTC m=+155.666744898"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.194694 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" event={"ID":"5452b83a-9747-46f4-9353-33496cda70b3","Type":"ContainerStarted","Data":"68074cc7197025394cf260c3320e001dfce00ae01d71369dbd6c5ef76192ce89"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.195848 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.196688 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4g95m" podStartSLOduration=129.196660241 podStartE2EDuration="2m9.196660241s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.184499871 +0000 UTC m=+155.684087843" watchObservedRunningTime="2026-01-29 08:41:15.196660241 +0000 UTC m=+155.696248193"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.197154 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" event={"ID":"954eb100-eded-479c-8ed9-0af63a167bcb","Type":"ContainerStarted","Data":"c461d4947ce7cdc52ca34922f3f3aaffd30d6b584fbcd039c1bc8e555485126b"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.197180 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" event={"ID":"954eb100-eded-479c-8ed9-0af63a167bcb","Type":"ContainerStarted","Data":"00fe24176a386c23a334631d13ba3a14a293d723692ac5a1d33707d4c2ef13a7"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.197448 5031 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rr7qs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.197491 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" podUID="5452b83a-9747-46f4-9353-33496cda70b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.199424 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" event={"ID":"090c6677-d6d6-4904-8c95-58c33fc2cc80","Type":"ContainerStarted","Data":"3ceba3b969609ebbf2b760541a8e42d3120c97d72838892ae3c69dd443002103"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.199455 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" event={"ID":"090c6677-d6d6-4904-8c95-58c33fc2cc80","Type":"ContainerStarted","Data":"a5bd9ea2c5d746876191f8044d36b391cc0de687e11c8e7f27083807a4deadd3"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.200824 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsjh" event={"ID":"107ea484-2b37-42f5-a7d8-f844fa231948","Type":"ContainerStarted","Data":"c71137dbc1f63dde43554a84305d92e74f0fd52fd2cdc5c78ebd84ae23b0c8a5"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.204322 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbjm4" event={"ID":"f07acf69-4876-413e-b098-b7074c7018c2","Type":"ContainerStarted","Data":"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489"}
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.221039 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngjq9" podStartSLOduration=130.221010903 podStartE2EDuration="2m10.221010903s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.21270219 +0000 UTC m=+155.712290142" watchObservedRunningTime="2026-01-29 08:41:15.221010903 +0000 UTC m=+155.720598855"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.228319 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gf74n"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.246504 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.257107 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.757070341 +0000 UTC m=+156.256658493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.290211 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xnzz7" podStartSLOduration=7.290183688 podStartE2EDuration="7.290183688s" podCreationTimestamp="2026-01-29 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.287320607 +0000 UTC m=+155.786908569" watchObservedRunningTime="2026-01-29 08:41:15.290183688 +0000 UTC m=+155.789771630"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.290513 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" podStartSLOduration=129.290505896 podStartE2EDuration="2m9.290505896s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.257201535 +0000 UTC m=+155.756789497" watchObservedRunningTime="2026-01-29 08:41:15.290505896 +0000 UTC m=+155.790093848"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.352007 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.352173 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.85212915 +0000 UTC m=+156.351717102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.352567 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.352921 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.852906773 +0000 UTC m=+156.352494725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.366941 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4h5k" podStartSLOduration=129.366923015 podStartE2EDuration="2m9.366923015s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.329670372 +0000 UTC m=+155.829258324" watchObservedRunningTime="2026-01-29 08:41:15.366923015 +0000 UTC m=+155.866510967"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.408690 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qwhkt" podStartSLOduration=6.408666483 podStartE2EDuration="6.408666483s" podCreationTimestamp="2026-01-29 08:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.393884249 +0000 UTC m=+155.893472201" watchObservedRunningTime="2026-01-29 08:41:15.408666483 +0000 UTC m=+155.908254435"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.409009 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wgvnk" podStartSLOduration=129.409002842 podStartE2EDuration="2m9.409002842s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.370476333 +0000 UTC m=+155.870064285" watchObservedRunningTime="2026-01-29 08:41:15.409002842 +0000 UTC m=+155.908590794"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.429893 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" podStartSLOduration=129.429875576 podStartE2EDuration="2m9.429875576s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.428430265 +0000 UTC m=+155.928018227" watchObservedRunningTime="2026-01-29 08:41:15.429875576 +0000 UTC m=+155.929463528"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.453259 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.453550 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.953505097 +0000 UTC m=+156.453093049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.453760 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.454319 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:15.954311829 +0000 UTC m=+156.453899781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.466228 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lbjm4" podStartSLOduration=129.466203322 podStartE2EDuration="2m9.466203322s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.466055358 +0000 UTC m=+155.965643310" watchObservedRunningTime="2026-01-29 08:41:15.466203322 +0000 UTC m=+155.965791274"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.531538 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fq9s" podStartSLOduration=129.531520709 podStartE2EDuration="2m9.531520709s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.514498043 +0000 UTC m=+156.014085995" watchObservedRunningTime="2026-01-29 08:41:15.531520709 +0000 UTC m=+156.031108661"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.532659 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g2xzt" podStartSLOduration=129.532655401 podStartE2EDuration="2m9.532655401s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.488936518 +0000 UTC m=+155.988524470" watchObservedRunningTime="2026-01-29 08:41:15.532655401 +0000 UTC m=+156.032243353"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.562238 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.564120 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.064081931 +0000 UTC m=+156.563669893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.582152 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" podStartSLOduration=129.582135836 podStartE2EDuration="2m9.582135836s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.561450617 +0000 UTC m=+156.061038569" watchObservedRunningTime="2026-01-29 08:41:15.582135836 +0000 UTC m=+156.081723788"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.607207 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7qm7" podStartSLOduration=129.607183887 podStartE2EDuration="2m9.607183887s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.605256062 +0000 UTC m=+156.104844014" watchObservedRunningTime="2026-01-29 08:41:15.607183887 +0000 UTC m=+156.106771849"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.611035 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5d45t" podStartSLOduration=129.611015283 podStartE2EDuration="2m9.611015283s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.584197953 +0000 UTC m=+156.083785905" watchObservedRunningTime="2026-01-29 08:41:15.611015283 +0000 UTC m=+156.110603235"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.666694 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" podStartSLOduration=129.66666938 podStartE2EDuration="2m9.66666938s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.634427208 +0000 UTC m=+156.134015170" watchObservedRunningTime="2026-01-29 08:41:15.66666938 +0000 UTC m=+156.166257332"
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.668742 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.669396 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.169356656 +0000 UTC m=+156.668944608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.669701 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" podStartSLOduration=129.669690355 podStartE2EDuration="2m9.669690355s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.664878341 +0000 UTC m=+156.164466293" watchObservedRunningTime="2026-01-29 08:41:15.669690355 +0000 UTC m=+156.169278307" Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.707869 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" podStartSLOduration=129.707845092 podStartE2EDuration="2m9.707845092s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:15.705699422 +0000 UTC m=+156.205287374" watchObservedRunningTime="2026-01-29 08:41:15.707845092 +0000 UTC m=+156.207433034" Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.770833 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.771341 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.271311388 +0000 UTC m=+156.770899340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
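
The half-second retry loop above is one failure with two symptoms: the kubelet wants to tear down volume pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 for the terminated pod 8f668bae-612b-4b75-9490-919e737c6a3b and to mount the same PVC into the new image-registry-697d97f7c8-ll2lx pod, but both operations need a CSI client for kubevirt.io.hostpath-provisioner, and that driver has not yet registered with the kubelet (the csi-hostpathplugin-tsvjs containers only start in the entries that follow). CSINode objects record which drivers have registered on each node, so one way to watch the registration land is to list them; a minimal sketch using client-go, assuming a reachable kubeconfig at the default path (the program is illustrative, not part of the kubelet):

```go
// csidrivers.go - list the CSI drivers registered on each node.
// An empty driver list matches the "not found in the list of
// registered CSI drivers" errors in the log above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// CSINode objects are maintained as node plugins register with the kubelet.
	csinodes, err := cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range csinodes.Items {
		if len(n.Spec.Drivers) == 0 {
			fmt.Printf("%s: no CSI drivers registered\n", n.Name)
			continue
		}
		for _, d := range n.Spec.Drivers {
			// e.g. "crc: kubevirt.io.hostpath-provisioner" once registration lands
			fmt.Printf("%s: %s\n", n.Name, d.Name)
		}
	}
}
```

Once the hostpath plugin's node service registers, the driver name appears in the node's CSINode spec (from a shell, `oc get csinode` shows the same driver list) and the Mount/Unmount retries above stop failing.
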
Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.874008 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.874714 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.37466476 +0000 UTC m=+156.874252722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.950957 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:15 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:15 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:15 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.951062 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:15 crc kubenswrapper[5031]: I0129 08:41:15.975693 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:15 crc kubenswrapper[5031]: E0129 08:41:15.976110 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.476090847 +0000 UTC m=+156.975678799 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.078310 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.078913 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.578889072 +0000 UTC m=+157.078477024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.180467 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.180711 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.68067373 +0000 UTC m=+157.180261682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.180922 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.181385 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.68137807 +0000 UTC m=+157.180966022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.224302 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qwhkt" event={"ID":"e17abb9f-88dc-4ed6-949e-c91bc349f478","Type":"ContainerStarted","Data":"77544574240e0127dbabea7de622120a419f512071bf809ea5198e7ce5e1e7fe"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.226771 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" event={"ID":"c3fd82ff-34b6-4e6c-97aa-0349b6cbf219","Type":"ContainerStarted","Data":"ce4728441ab0b2ea31e45ba8350a763749989f0ea97c8ccf3e08d20e29d77e8c"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.228528 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" event={"ID":"fb9eb323-2fa1-4562-a71f-ccb3f771395b","Type":"ContainerStarted","Data":"e03160f52ece9c66408c04b96eb917888475e60fce2400b1581ce161f5577339"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.230312 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z4ldg" event={"ID":"bf8b51f5-e358-44b0-874b-454aa6479a9e","Type":"ContainerStarted","Data":"d48f6fb6d4634999b568dd5e354974eba6d03b0fb3e90a4054f6c88308c47e5c"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.233797 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" event={"ID":"0d371617-7dd8-407f-b233-73ec3cd483e2","Type":"ContainerStarted","Data":"0ad2851888507369f8ccd0592bedfe71ff5926c6b9303047a813ded052a4d618"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.233851 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" 
event={"ID":"0d371617-7dd8-407f-b233-73ec3cd483e2","Type":"ContainerStarted","Data":"a0944ae29435f9e2a49623e80af527c081436f777499d922a86e2e6ee619466e"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.236403 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vvvr9" event={"ID":"db9f8ea0-be69-4991-801b-4dea935a10b0","Type":"ContainerStarted","Data":"da7644a34221efdf82c598006464c21daa53dcce84f187572f8ee87fd96926da"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.241886 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-67mcw" event={"ID":"c03dbc3e-7d90-446e-b328-0c7ce1fb9177","Type":"ContainerStarted","Data":"694825733c9da8383508a087b2a671a6053589836fc4cc1b41c6710a95c2a56a"} Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.242870 5031 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frcbb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.242910 5031 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r78xm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.242931 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" podUID="c1854052-ad41-4a6f-8538-2456b0008253" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.242976 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.243486 5031 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tvddp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.243530 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" podUID="e577602e-26da-4f65-8997-38b52ae67d82" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.254836 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.275803 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" podStartSLOduration=130.27575663 
podStartE2EDuration="2m10.27575663s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:16.274488355 +0000 UTC m=+156.774076307" watchObservedRunningTime="2026-01-29 08:41:16.27575663 +0000 UTC m=+156.775344582" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.286463 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.288436 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.788410244 +0000 UTC m=+157.287998196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.347795 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" podStartSLOduration=131.347772045 podStartE2EDuration="2m11.347772045s" podCreationTimestamp="2026-01-29 08:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:16.335241805 +0000 UTC m=+156.834829757" watchObservedRunningTime="2026-01-29 08:41:16.347772045 +0000 UTC m=+156.847359997" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.372249 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f8qtb" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.372329 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rr7qs" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.414794 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.415186 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:16.915174961 +0000 UTC m=+157.414762913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.469042 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.469510 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.490417 5031 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fmrqw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.490473 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" podUID="0d371617-7dd8-407f-b233-73ec3cd483e2" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.515934 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.516253 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.016227308 +0000 UTC m=+157.515815260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
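
Mixed into the volume retries, several kubelet probes are still failing: the router's startup probe gets an HTTP 500 whose body lists the failing sub-checks ([-]backend-http, [-]has-synced), and the openshift-apiserver startup probe (and, just below, the oauth-apiserver one) is refused while those servers finish starting. A minimal sketch of the check behind these messages, assuming the usual kubelet convention that any 2xx/3xx response is success and anything else, including a connection error, is failure; probeOnce is a hypothetical helper for illustration, not kubelet code (the real prober lives in k8s.io/kubernetes/pkg/probe/http):

```go
// probe.go - minimal sketch of a kubelet-style HTTP health probe.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP GET and applies the success rule
// used for the probe results in the log above.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.7:8443: connect: connection refused"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	// e.g. the router's 500 while its sub-checks still fail
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	// URL from the marketplace-operator readiness-probe entries above;
	// only reachable from inside the cluster network.
	if err := probeOnce("http://10.217.0.31:8080/healthz", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	} else {
		fmt.Println("probe ok")
	}
}
```

Failures like these are routine during startup: the kubelet keeps retrying at the configured period, and a pod only transitions to Ready once the probe passes, which is what the "SyncLoop (probe)" status="ready" entries elsewhere in the log record.
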
Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.564601 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.564652 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.574465 5031 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-bvrqv container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.574506 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv" podUID="c3fd82ff-34b6-4e6c-97aa-0349b6cbf219" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.617939 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.618552 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.11853594 +0000 UTC m=+157.618123892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.718647 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.719213 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-29 08:41:17.219197317 +0000 UTC m=+157.718785269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.820181 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.820506 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.32049406 +0000 UTC m=+157.820082012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.920724 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.920869 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.420843558 +0000 UTC m=+157.920431510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.921020 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:16 crc kubenswrapper[5031]: E0129 08:41:16.921324 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.421316161 +0000 UTC m=+157.920904113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.949224 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:16 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:16 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:16 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:16 crc kubenswrapper[5031]: I0129 08:41:16.949294 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.022530 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.022705 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.522672226 +0000 UTC m=+158.022260198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.022793 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.023125 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.523114709 +0000 UTC m=+158.022702661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.123973 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.124176 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.624148305 +0000 UTC m=+158.123736257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.124298 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.124660 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.624648909 +0000 UTC m=+158.124236861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.225668 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.225788 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.725768569 +0000 UTC m=+158.225356521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.225971 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.226217 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.726207501 +0000 UTC m=+158.225795453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.249740 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" event={"ID":"fb9eb323-2fa1-4562-a71f-ccb3f771395b","Type":"ContainerStarted","Data":"89e131508fa45c151211a3936ae0cf98498b61b95481145505e4cc2701b47fb9"} Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.249875 5031 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r78xm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.249910 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.262042 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frcbb" Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.326945 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.327139 5031 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.827113584 +0000 UTC m=+158.326701536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.327591 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.329149 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.82913477 +0000 UTC m=+158.328722722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.429036 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.429313 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.929269682 +0000 UTC m=+158.428857634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.429708 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.430195 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:17.930187608 +0000 UTC m=+158.429775560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.531468 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.531908 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.031882362 +0000 UTC m=+158.531470314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.634082 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.634538 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.134510563 +0000 UTC m=+158.634098515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.734968 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.735176 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.235146519 +0000 UTC m=+158.734734471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.735236 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.735538 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.2355245 +0000 UTC m=+158.735112452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.836592 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.836956 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.336940787 +0000 UTC m=+158.836528739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.939673 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:17 crc kubenswrapper[5031]: E0129 08:41:17.940444 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.44035211 +0000 UTC m=+158.939940062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.955149 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:17 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:17 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:17 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:17 crc kubenswrapper[5031]: I0129 08:41:17.955229 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.041218 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.041447 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.541418997 +0000 UTC m=+159.041006949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.041520 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.041905 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.54189554 +0000 UTC m=+159.041483492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.142168 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.142393 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.64232751 +0000 UTC m=+159.141915462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.142551 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.142877 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.642862956 +0000 UTC m=+159.142450898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.243699 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.243902 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.743876202 +0000 UTC m=+159.243464154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.244379 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.244693 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.744679094 +0000 UTC m=+159.244267046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.284271 5031 generic.go:334] "Generic (PLEG): container finished" podID="dba2693e-b691-45ea-9447-95fc1da261ed" containerID="f57ae943bbf74e86e3036199b0d8d647cca0d67f3dd5956ce749836cf1bd085c" exitCode=0 Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.317743 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" event={"ID":"dba2693e-b691-45ea-9447-95fc1da261ed","Type":"ContainerDied","Data":"f57ae943bbf74e86e3036199b0d8d647cca0d67f3dd5956ce749836cf1bd085c"} Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.317808 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" event={"ID":"fb9eb323-2fa1-4562-a71f-ccb3f771395b","Type":"ContainerStarted","Data":"1b96588890c10fe1983418d9782b8182fd9cc1cec797baa51ff8b5c90199ac24"} Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.346109 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.346398 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.846346958 +0000 UTC m=+159.345934920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.346786 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.347236 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.847224343 +0000 UTC m=+159.346812295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.388565 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5f9r7"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.421513 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.435079 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.447110 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f9r7"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.447765 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.450635 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:18.950616955 +0000 UTC m=+159.450204907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.551729 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.551837 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55g95\" (UniqueName: \"kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.551878 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.551910 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.552458 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.052439154 +0000 UTC m=+159.552027106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.573831 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.575626 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.580832 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.603515 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.652484 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.652794 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjsx\" (UniqueName: \"kubernetes.io/projected/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-kube-api-access-5vjsx\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.652864 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.652895 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-catalog-content\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.652953 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.152923865 +0000 UTC m=+159.652511817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.653049 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55g95\" (UniqueName: \"kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.653111 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.653140 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.653169 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-utilities\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.653702 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.153677006 +0000 UTC m=+159.653264958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.653840 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.654034 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.695684 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55g95\" (UniqueName: \"kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95\") pod \"community-operators-5f9r7\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.753813 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.753952 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.253933241 +0000 UTC m=+159.753521193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754384 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjsx\" (UniqueName: \"kubernetes.io/projected/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-kube-api-access-5vjsx\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754415 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-catalog-content\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754463 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754490 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-utilities\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754846 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-catalog-content\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.754892 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.254874437 +0000 UTC m=+159.754462379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.754906 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-utilities\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.768453 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brb4j"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.769677 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.795611 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.801647 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjsx\" (UniqueName: \"kubernetes.io/projected/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-kube-api-access-5vjsx\") pod \"certified-operators-dflqz\" (UID: \"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec\") " pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.817052 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brb4j"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.855654 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.856298 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.856344 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.856414 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzqp5\" (UniqueName: \"kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " 
pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.856550 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.356527211 +0000 UTC m=+159.856115163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.907958 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.949426 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:18 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:18 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:18 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.949503 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.958264 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.958311 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.958334 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.958378 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzqp5\" (UniqueName: \"kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " 
pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.959647 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.960331 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:18 crc kubenswrapper[5031]: E0129 08:41:18.961124 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.461070466 +0000 UTC m=+159.960658558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.981473 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"] Jan 29 08:41:18 crc kubenswrapper[5031]: I0129 08:41:18.982654 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.003829 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"] Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.011519 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzqp5\" (UniqueName: \"kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5\") pod \"community-operators-brb4j\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") " pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.059558 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.059792 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.059852 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.559811078 +0000 UTC m=+160.059399030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.059973 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8fv\" (UniqueName: \"kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.060052 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.060135 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.060623 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.560615371 +0000 UTC m=+160.060203323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.106336 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brb4j" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.162527 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.162841 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.162918 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8fv\" (UniqueName: \"kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.162948 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.163463 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.163564 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.663545951 +0000 UTC m=+160.163133913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.164004 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.186505 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8fv\" (UniqueName: \"kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv\") pod \"certified-operators-bhkg4\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.264170 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.264482 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.764471114 +0000 UTC m=+160.264059066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.296812 5031 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.315753 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.365512 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.366069 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.866032905 +0000 UTC m=+160.365620857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.367157 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" event={"ID":"fb9eb323-2fa1-4562-a71f-ccb3f771395b","Type":"ContainerStarted","Data":"03684d5b4b5f3cd131c824be03cd844f3db7f5234f36e0c3ac000b402c4688c4"} Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.408799 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" podStartSLOduration=10.408769141 podStartE2EDuration="10.408769141s" podCreationTimestamp="2026-01-29 08:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:19.407595958 +0000 UTC m=+159.907183920" watchObservedRunningTime="2026-01-29 08:41:19.408769141 +0000 UTC m=+159.908357093" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.445243 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f9r7"] Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.467354 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.468636 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:19.968522102 +0000 UTC m=+160.468110054 (durationBeforeRetry 500ms). 
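Note: the plugin_watcher line just above (08:41:19.296) is the event the retry loop has been waiting for. The csi-hostpathplugin pod that PLEG reports as started has created its registration socket, kubevirt.io.hostpath-provisioner-reg.sock, under /var/lib/kubelet/plugins_registry/, and the kubelet's plugin watcher notices new sockets there via inotify. A stand-alone sketch of that discovery step (the kubelet uses the fsnotify package internally; this simplified version only prints create events):

    package main

    import (
        "fmt"
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&fsnotify.Create != 0 {
                // e.g. kubevirt.io.hostpath-provisioner-reg.sock
                fmt.Println("Adding socket path to desired state cache:", ev.Name)
            }
        }
    }

Once the socket is in the desired-state cache, the reconciler issues RegisterPlugin (visible below at 08:41:19.863) and validation follows.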
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.560759 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.574534 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.574679 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.074642921 +0000 UTC m=+160.574230873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.574970 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.575276 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.075266419 +0000 UTC m=+160.574854371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: W0129 08:41:19.624522 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe2f9cf_9f00_48da_849a_29aa4b0e66ec.slice/crio-099f58b3e490a6c36a2c50c379f8e5ea70e8c0d1ffdb1ad37e60b70e03dd103d WatchSource:0}: Error finding container 099f58b3e490a6c36a2c50c379f8e5ea70e8c0d1ffdb1ad37e60b70e03dd103d: Status 404 returned error can't find the container with id 099f58b3e490a6c36a2c50c379f8e5ea70e8c0d1ffdb1ad37e60b70e03dd103d Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.677261 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.677814 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.177799497 +0000 UTC m=+160.677387449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.780010 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.780310 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.280298505 +0000 UTC m=+160.779886457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.790626 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brb4j"] Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.863115 5031 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T08:41:19.29684346Z","Handler":null,"Name":""} Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.889275 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.889623 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.389598353 +0000 UTC m=+160.889186305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.890178 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:19 crc kubenswrapper[5031]: E0129 08:41:19.891790 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.391778474 +0000 UTC m=+160.891366426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ll2lx" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.943075 5031 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.943126 5031 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.955715 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:19 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:19 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:19 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.955786 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:19 crc kubenswrapper[5031]: I0129 08:41:19.994699 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.015199 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.020713 5031 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.048253 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.096887 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume\") pod \"dba2693e-b691-45ea-9447-95fc1da261ed\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") "
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.100120 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krbwn\" (UniqueName: \"kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn\") pod \"dba2693e-b691-45ea-9447-95fc1da261ed\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") "
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.100159 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume\") pod \"dba2693e-b691-45ea-9447-95fc1da261ed\" (UID: \"dba2693e-b691-45ea-9447-95fc1da261ed\") "
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.100424 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.101241 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "dba2693e-b691-45ea-9447-95fc1da261ed" (UID: "dba2693e-b691-45ea-9447-95fc1da261ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.105937 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn" (OuterVolumeSpecName: "kube-api-access-krbwn") pod "dba2693e-b691-45ea-9447-95fc1da261ed" (UID: "dba2693e-b691-45ea-9447-95fc1da261ed"). InnerVolumeSpecName "kube-api-access-krbwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.108226 5031 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.108277 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.174638 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dba2693e-b691-45ea-9447-95fc1da261ed" (UID: "dba2693e-b691-45ea-9447-95fc1da261ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.193027 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ll2lx\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.202187 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krbwn\" (UniqueName: \"kubernetes.io/projected/dba2693e-b691-45ea-9447-95fc1da261ed-kube-api-access-krbwn\") on node \"crc\" DevicePath \"\""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.202216 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba2693e-b691-45ea-9447-95fc1da261ed-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.202226 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba2693e-b691-45ea-9447-95fc1da261ed-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.293324 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.318231 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
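The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." entry explains why MountDevice "succeeds" immediately afterwards: CSI mounting is two-phase (stage the device to a shared global mount path, then publish it into each pod's directory), and a driver that does not advertise staging skips the first phase, leaving only MountVolume.SetUp to do real work. A rough sketch of that decision, using hypothetical types rather than the CSI spec's generated ones:

package main

import "fmt"

type nodeCapability string

const capStageUnstage nodeCapability = "STAGE_UNSTAGE_VOLUME"

// deviceMountRequired mirrors the check behind the "Skipping MountDevice..."
// line: NodeStageVolume is only called when the driver advertises the
// STAGE_UNSTAGE_VOLUME capability.
func deviceMountRequired(caps []nodeCapability) bool {
	for _, c := range caps {
		if c == capStageUnstage {
			return true
		}
	}
	return false
}

func main() {
	// kubevirt.io.hostpath-provisioner evidently advertises no staging here.
	hostpathCaps := []nodeCapability{}
	if !deviceMountRequired(hostpathCaps) {
		fmt.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
	}
}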
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.370861 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"]
Jan 29 08:41:20 crc kubenswrapper[5031]: E0129 08:41:20.371181 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba2693e-b691-45ea-9447-95fc1da261ed" containerName="collect-profiles"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.371202 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba2693e-b691-45ea-9447-95fc1da261ed" containerName="collect-profiles"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.371347 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba2693e-b691-45ea-9447-95fc1da261ed" containerName="collect-profiles"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.375009 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.377411 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.393298 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz" event={"ID":"dba2693e-b691-45ea-9447-95fc1da261ed","Type":"ContainerDied","Data":"34d05975367dac2defc69b38eb9f575392b91519cf0d45c1ec28952056087232"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.393346 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d05975367dac2defc69b38eb9f575392b91519cf0d45c1ec28952056087232"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.393448 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.398823 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.402613 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerStarted","Data":"248d153bc7f4d4335234ecfe50075f01480a57cb34cb6fedb46484f33b94f5da"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.426555 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerStarted","Data":"31bc60f84331edf2f10a3054cb8828016e4ea5e3c1b40c56cb836bdebe1372eb"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.426603 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerStarted","Data":"099f58b3e490a6c36a2c50c379f8e5ea70e8c0d1ffdb1ad37e60b70e03dd103d"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.449031 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerStarted","Data":"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.449092 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerStarted","Data":"10b1d35c8691db7c915a494fa24bf26c3c590d27f8bd3fd6eda648f91de0b949"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.459687 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerStarted","Data":"1ab59214313cd989532506e8231ccd9d0bf62ff51e1017e7817c4ba345e05a84"}
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.508929 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.508972 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbdv\" (UniqueName: \"kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.509029 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.610889 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.611233 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbdv\" (UniqueName: \"kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.611289 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.611970 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.612176 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.671069 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbdv\" (UniqueName: \"kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv\") pod \"redhat-marketplace-m9hg9\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.706859 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.769856 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.771838 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.786162 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.852330 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.852993 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.856488 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.856661 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.861681 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.878922 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"]
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.918102 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7222\" (UniqueName: \"kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.918199 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.918247 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.918303 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.918326 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.949028 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 08:41:20 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld
Jan 29 08:41:20 crc kubenswrapper[5031]: [+]process-running ok
Jan 29 08:41:20 crc kubenswrapper[5031]: healthz check failed
Jan 29 08:41:20 crc kubenswrapper[5031]: I0129 08:41:20.949406 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019496 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7222\" (UniqueName: \"kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019590 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019661 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019729 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019782 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.019803 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.020386 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.020979 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.041040 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.043607 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7222\" (UniqueName: \"kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222\") pod \"redhat-marketplace-8gmmw\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.084962 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"]
Jan 29 08:41:21 crc kubenswrapper[5031]: W0129 08:41:21.090993 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad4a529c_a8ab_47c5_84cd_44002bebb7ce.slice/crio-6f7c84a84146ec1bf5386600a2f6c41ebd9227c86b95feb0ef8d1d8f458133e5 WatchSource:0}: Error finding container 6f7c84a84146ec1bf5386600a2f6c41ebd9227c86b95feb0ef8d1d8f458133e5: Status 404 returned error can't find the container with id 6f7c84a84146ec1bf5386600a2f6c41ebd9227c86b95feb0ef8d1d8f458133e5
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.144221 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.179320 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.295986 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.295988 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.296320 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.296395 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.467191 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"]
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.471601 5031 generic.go:334] "Generic (PLEG): container finished" podID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerID="7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad" exitCode=0
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.471679 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerDied","Data":"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.473675 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.476951 5031 generic.go:334] "Generic (PLEG): container finished" podID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerID="6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd" exitCode=0
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.477012 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerDied","Data":"6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.485416 5031 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fmrqw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]log ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]etcd ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/max-in-flight-filter ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/openshift.io-startinformers ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 29 08:41:21 crc kubenswrapper[5031]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 29 08:41:21 crc kubenswrapper[5031]: livez check failed
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.485487 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" podUID="0d371617-7dd8-407f-b233-73ec3cd483e2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.503283 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" event={"ID":"7dee0d39-2211-4219-a780-bcf29f69425a","Type":"ContainerStarted","Data":"e0a5ec387534c6c1f6e123a5e5a6096bee1f79108d65e43d62a5f84acc47eabc"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.503328 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" event={"ID":"7dee0d39-2211-4219-a780-bcf29f69425a","Type":"ContainerStarted","Data":"19145aecd6523579c39007e1366095f2e4984fbbe91c6d42177b75ce026d8958"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.503551 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.535717 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerID="1dc033b9449e6c9edc85f4a5a1b39e291b6354296db200a517af8860f10c572b" exitCode=0
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.535821 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerDied","Data":"1dc033b9449e6c9edc85f4a5a1b39e291b6354296db200a517af8860f10c572b"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.535846 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerStarted","Data":"6f7c84a84146ec1bf5386600a2f6c41ebd9227c86b95feb0ef8d1d8f458133e5"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.556301 5031 generic.go:334] "Generic (PLEG): container finished" podID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerID="1eedcdb42873991b1c2e7d87f58d12786027e63e5785ba5777ac777207c2f273" exitCode=0
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.556883 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerDied","Data":"1eedcdb42873991b1c2e7d87f58d12786027e63e5785ba5777ac777207c2f273"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.570103 5031 generic.go:334] "Generic (PLEG): container finished" podID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerID="31bc60f84331edf2f10a3054cb8828016e4ea5e3c1b40c56cb836bdebe1372eb" exitCode=0
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.570147 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerDied","Data":"31bc60f84331edf2f10a3054cb8828016e4ea5e3c1b40c56cb836bdebe1372eb"}
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.573009 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.574447 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.580760 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bvrqv"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.590128 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lbjm4"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.590671 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lbjm4"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.592683 5031 patch_prober.go:28] interesting pod/console-f9d7485db-lbjm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.592732 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lbjm4" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.605219 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" podStartSLOduration=135.605199599 podStartE2EDuration="2m15.605199599s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:21.578396859 +0000 UTC m=+162.077984841" watchObservedRunningTime="2026-01-29 08:41:21.605199599 +0000 UTC m=+162.104787551"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.765253 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-627gc"]
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.766309 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-627gc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.768703 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.775073 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.792072 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-627gc"]
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.838683 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.838761 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lzg6\" (UniqueName: \"kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.838810 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc"
Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.940225 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc"
\"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.940288 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lzg6\" (UniqueName: \"kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.940314 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.940711 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.940796 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.944732 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.947837 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:21 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:21 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:21 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.948141 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:21 crc kubenswrapper[5031]: I0129 08:41:21.960129 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lzg6\" (UniqueName: \"kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6\") pod \"redhat-operators-627gc\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.095968 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.127528 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.162492 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.163727 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.196557 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.252083 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.252210 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.252271 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4gm\" (UniqueName: \"kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.365036 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.365481 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.365561 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4gm\" (UniqueName: \"kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.365746 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " 
pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.366032 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.388017 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4gm\" (UniqueName: \"kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm\") pod \"redhat-operators-59md2\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.488552 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.583418 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"52b17e33-c350-4d10-b649-12e737846100","Type":"ContainerStarted","Data":"c0240a1e6252ad79edcb6527dec76020b653f5f831bb8c362620a62b68f76c52"} Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.583461 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"52b17e33-c350-4d10-b649-12e737846100","Type":"ContainerStarted","Data":"da396c4703061cd7aa332d4455d9d0ff6dd63715532b9a4bf07d904c9ba44c7f"} Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.585132 5031 generic.go:334] "Generic (PLEG): container finished" podID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerID="9a11c41b52d590b9fdf68564f04868a6d2deedcbcd8b7e9a8f457bcf0bf299e7" exitCode=0 Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.585514 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerDied","Data":"9a11c41b52d590b9fdf68564f04868a6d2deedcbcd8b7e9a8f457bcf0bf299e7"} Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.585581 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerStarted","Data":"99bb78e0754aeeed69936aa10d9743e79316b360138af43598a78292af6ce0ba"} Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.606945 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.606929584 podStartE2EDuration="2.606929584s" podCreationTimestamp="2026-01-29 08:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:22.606392518 +0000 UTC m=+163.105980480" watchObservedRunningTime="2026-01-29 08:41:22.606929584 +0000 UTC m=+163.106517526" Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.751322 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-627gc"] Jan 29 08:41:22 crc kubenswrapper[5031]: W0129 08:41:22.789923 5031 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2c0807_7bcf_435a_8961_fdef958e6c53.slice/crio-404f8b03daa95847aa8806c038a0f0a0214664a790cbbb2c8ed546f4796f04eb WatchSource:0}: Error finding container 404f8b03daa95847aa8806c038a0f0a0214664a790cbbb2c8ed546f4796f04eb: Status 404 returned error can't find the container with id 404f8b03daa95847aa8806c038a0f0a0214664a790cbbb2c8ed546f4796f04eb Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.901027 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.952714 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:22 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:22 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:22 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:22 crc kubenswrapper[5031]: I0129 08:41:22.952795 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.492852 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.493571 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.498894 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.499976 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.503781 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.588480 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.588584 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.597447 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerStarted","Data":"75c53a40bbbe0fce21357b29cf51dacd2f5934d6a88b09423e227198f4d2856b"} Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.600687 5031 generic.go:334] "Generic (PLEG): 
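The cadvisor "Failed to process watch event ... Status 404" warnings above are a benign race: the cgroup inotify event arrives while the CRI-O container it names is still being created or has already been torn down. The slice path itself encodes the pod UID with underscores in place of dashes; a small sketch that recovers it, assuming that naming convention holds as it does for the paths in this log:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// A pod slice looks like kubepods-burstable-pod<uid-with-underscores>.slice.
var sliceRe = regexp.MustCompile(`pod([0-9a-f_]{36})\.slice`)

// podUIDFromSlice extracts the pod UID from a kubepods cgroup path.
func podUIDFromSlice(path string) (string, bool) {
	m := sliceRe.FindStringSubmatch(path)
	if m == nil {
		return "", false
	}
	return strings.ReplaceAll(m[1], "_", "-"), true
}

func main() {
	p := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2c0807_7bcf_435a_8961_fdef958e6c53.slice"
	if uid, ok := podUIDFromSlice(p); ok {
		// Prints dd2c0807-7bcf-435a-8961-fdef958e6c53, the redhat-operators-627gc pod above.
		fmt.Println(uid)
	}
}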
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.600789 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"52b17e33-c350-4d10-b649-12e737846100","Type":"ContainerDied","Data":"c0240a1e6252ad79edcb6527dec76020b653f5f831bb8c362620a62b68f76c52"}
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.603482 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerStarted","Data":"404f8b03daa95847aa8806c038a0f0a0214664a790cbbb2c8ed546f4796f04eb"}
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.689823 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.689917 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.689994 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.711557 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.827030 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.947771 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 08:41:23 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld
Jan 29 08:41:23 crc kubenswrapper[5031]: [+]process-running ok
Jan 29 08:41:23 crc kubenswrapper[5031]: healthz check failed
Jan 29 08:41:23 crc kubenswrapper[5031]: I0129 08:41:23.947821 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.149111 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qwhkt"
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.395890 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 08:41:24 crc kubenswrapper[5031]: W0129 08:41:24.408848 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8381c2a9_1b66_49cc_99ed_766f70d3c55e.slice/crio-854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa WatchSource:0}: Error finding container 854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa: Status 404 returned error can't find the container with id 854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.695911 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8381c2a9-1b66-49cc-99ed-766f70d3c55e","Type":"ContainerStarted","Data":"854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa"}
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.700111 5031 generic.go:334] "Generic (PLEG): container finished" podID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerID="103951b064d0fedffe647c9143ad0e7ba07707771488eb0a477c04afb69a92cf" exitCode=0
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.700220 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerDied","Data":"103951b064d0fedffe647c9143ad0e7ba07707771488eb0a477c04afb69a92cf"}
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.707394 5031 generic.go:334] "Generic (PLEG): container finished" podID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerID="ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4" exitCode=0
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.707504 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerDied","Data":"ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4"}
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.948689 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 08:41:24 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld
Jan 29 08:41:24 crc kubenswrapper[5031]: [+]process-running ok
Jan 29 08:41:24 crc kubenswrapper[5031]: healthz check failed
Jan 29 08:41:24 crc kubenswrapper[5031]: I0129 08:41:24.949397 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.153120 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.248905 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access\") pod \"52b17e33-c350-4d10-b649-12e737846100\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") "
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.248986 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir\") pod \"52b17e33-c350-4d10-b649-12e737846100\" (UID: \"52b17e33-c350-4d10-b649-12e737846100\") "
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.249091 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "52b17e33-c350-4d10-b649-12e737846100" (UID: "52b17e33-c350-4d10-b649-12e737846100"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.249656 5031 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52b17e33-c350-4d10-b649-12e737846100-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.294663 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "52b17e33-c350-4d10-b649-12e737846100" (UID: "52b17e33-c350-4d10-b649-12e737846100"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
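Timestamps such as "m=+163.105980480" carry Go's monotonic clock reading: seconds since the kubelet process started. Subtracting that offset from the wall-clock part of the same timestamp therefore dates the process start; a quick check with values copied from the revision-pruner-9-crc latency entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	obs, err := time.Parse(layout, "2026-01-29 08:41:22.606392518 +0000 UTC")
	if err != nil {
		panic(err)
	}
	const offsetSec = 163.105980480 // the m=+ value attached to the same timestamp
	start := obs.Add(-time.Duration(offsetSec * float64(time.Second)))
	// Prints roughly 2026-01-29 08:38:39.5 +0000 UTC: the kubelet start time.
	fmt.Println(start)
}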
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.351188 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52b17e33-c350-4d10-b649-12e737846100-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.749061 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8381c2a9-1b66-49cc-99ed-766f70d3c55e","Type":"ContainerStarted","Data":"1c958f9c6d1d3f50c37ca13152585a899903d6e9e58db54d433a3a8496b3063e"} Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.760801 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"52b17e33-c350-4d10-b649-12e737846100","Type":"ContainerDied","Data":"da396c4703061cd7aa332d4455d9d0ff6dd63715532b9a4bf07d904c9ba44c7f"} Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.760859 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.760868 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da396c4703061cd7aa332d4455d9d0ff6dd63715532b9a4bf07d904c9ba44c7f" Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.767295 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.767275839 podStartE2EDuration="2.767275839s" podCreationTimestamp="2026-01-29 08:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:41:25.766005023 +0000 UTC m=+166.265592965" watchObservedRunningTime="2026-01-29 08:41:25.767275839 +0000 UTC m=+166.266863791" Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.950856 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:25 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:25 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:25 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:25 crc kubenswrapper[5031]: I0129 08:41:25.950960 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.474158 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.478945 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fmrqw" Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.784718 5031 generic.go:334] "Generic (PLEG): container finished" podID="8381c2a9-1b66-49cc-99ed-766f70d3c55e" containerID="1c958f9c6d1d3f50c37ca13152585a899903d6e9e58db54d433a3a8496b3063e" exitCode=0 Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.785583 5031 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8381c2a9-1b66-49cc-99ed-766f70d3c55e","Type":"ContainerDied","Data":"1c958f9c6d1d3f50c37ca13152585a899903d6e9e58db54d433a3a8496b3063e"} Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.948234 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:26 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:26 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:26 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:26 crc kubenswrapper[5031]: I0129 08:41:26.948320 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:27 crc kubenswrapper[5031]: I0129 08:41:27.952531 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:27 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:27 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:27 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:27 crc kubenswrapper[5031]: I0129 08:41:27.952821 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.268693 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.449912 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir\") pod \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.450096 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access\") pod \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\" (UID: \"8381c2a9-1b66-49cc-99ed-766f70d3c55e\") " Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.450343 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.450507 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8381c2a9-1b66-49cc-99ed-766f70d3c55e" (UID: "8381c2a9-1b66-49cc-99ed-766f70d3c55e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.465561 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8381c2a9-1b66-49cc-99ed-766f70d3c55e" (UID: "8381c2a9-1b66-49cc-99ed-766f70d3c55e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.466007 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20a410c7-0476-4e62-9ee1-5fb6998f308f-metrics-certs\") pod \"network-metrics-daemon-wnmhx\" (UID: \"20a410c7-0476-4e62-9ee1-5fb6998f308f\") " pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.552068 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.552106 5031 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8381c2a9-1b66-49cc-99ed-766f70d3c55e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.606931 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wnmhx" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.809339 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8381c2a9-1b66-49cc-99ed-766f70d3c55e","Type":"ContainerDied","Data":"854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa"} Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.809393 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854a047c4fdd3cb69a232d14dc9de885dbac22bb5c88d25d6e32455673b952fa" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.809430 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.950379 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:28 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:28 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:28 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:28 crc kubenswrapper[5031]: I0129 08:41:28.950452 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:29 crc kubenswrapper[5031]: I0129 08:41:29.948208 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:29 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:29 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:29 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:29 crc kubenswrapper[5031]: I0129 08:41:29.948266 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:30 crc kubenswrapper[5031]: I0129 08:41:30.947260 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:30 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:30 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:30 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:30 crc kubenswrapper[5031]: I0129 08:41:30.947693 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.295344 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.295419 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.295490 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: 
connect: connection refused" start-of-body= Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.295538 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.590950 5031 patch_prober.go:28] interesting pod/console-f9d7485db-lbjm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.591307 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lbjm4" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.949282 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:41:31 crc kubenswrapper[5031]: [-]has-synced failed: reason withheld Jan 29 08:41:31 crc kubenswrapper[5031]: [+]process-running ok Jan 29 08:41:31 crc kubenswrapper[5031]: healthz check failed Jan 29 08:41:31 crc kubenswrapper[5031]: I0129 08:41:31.949406 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:41:32 crc kubenswrapper[5031]: I0129 08:41:32.951383 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:32 crc kubenswrapper[5031]: I0129 08:41:32.955837 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4v677" Jan 29 08:41:38 crc kubenswrapper[5031]: I0129 08:41:38.493462 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:41:38 crc kubenswrapper[5031]: I0129 08:41:38.494051 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:41:38 crc kubenswrapper[5031]: I0129 08:41:38.534936 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:40 crc kubenswrapper[5031]: I0129 08:41:40.326006 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:41:40 crc kubenswrapper[5031]: I0129 08:41:40.800483 5031 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wnmhx"] Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296096 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296150 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296113 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296221 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296252 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-sp9n7" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296819 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"52e7744a971ee6b75ad4411046904af9aac4e439b7692e733799497f912cd99c"} pod="openshift-console/downloads-7954f5f757-sp9n7" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296897 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" containerID="cri-o://52e7744a971ee6b75ad4411046904af9aac4e439b7692e733799497f912cd99c" gracePeriod=2 Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296964 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.296986 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.594162 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.597815 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:41:41 crc kubenswrapper[5031]: 
I0129 08:41:41.910635 5031 generic.go:334] "Generic (PLEG): container finished" podID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerID="52e7744a971ee6b75ad4411046904af9aac4e439b7692e733799497f912cd99c" exitCode=0 Jan 29 08:41:41 crc kubenswrapper[5031]: I0129 08:41:41.910721 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sp9n7" event={"ID":"5f4e6cea-65e3-446f-9925-d63d00fc235f","Type":"ContainerDied","Data":"52e7744a971ee6b75ad4411046904af9aac4e439b7692e733799497f912cd99c"} Jan 29 08:41:51 crc kubenswrapper[5031]: I0129 08:41:51.296060 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:41:51 crc kubenswrapper[5031]: I0129 08:41:51.296829 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:41:52 crc kubenswrapper[5031]: I0129 08:41:52.007192 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" Jan 29 08:41:55 crc kubenswrapper[5031]: E0129 08:41:55.412206 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 08:41:55 crc kubenswrapper[5031]: E0129 08:41:55.412403 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-brb4j_openshift-marketplace(5dd13a7c-9e64-425b-b358-7e6657fa32ab): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:41:55 crc kubenswrapper[5031]: E0129 08:41:55.413549 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-brb4j" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.690645 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 08:41:59 crc kubenswrapper[5031]: E0129 08:41:59.691245 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8381c2a9-1b66-49cc-99ed-766f70d3c55e" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.691260 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8381c2a9-1b66-49cc-99ed-766f70d3c55e" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: E0129 08:41:59.691275 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b17e33-c350-4d10-b649-12e737846100" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.691283 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b17e33-c350-4d10-b649-12e737846100" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.691413 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8381c2a9-1b66-49cc-99ed-766f70d3c55e" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.691432 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b17e33-c350-4d10-b649-12e737846100" containerName="pruner" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.691937 5031 util.go:30] "No sandbox for pod can be found. 
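
The ErrImagePull entries here and the ImagePullBackOff entries further below are two halves of the same retry cycle: each failed pull ("context canceled" while copying the catalog image) is reported once as ErrImagePull, after which the kubelet parks the init container in ImagePullBackOff and retries with an increasing delay (doubling per failure up to a cap of a few minutes by default, as far as I know). A generic sketch of that progression, with illustrative delays:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // illustrative initial back-off
	const maxDelay = 5 * time.Minute // illustrative cap

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: ErrImagePull; backing off %v (ImagePullBackOff)\n", attempt, delay)
		delay *= 2 // exponential back-off between pull attempts
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
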
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.697558 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.698003 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.701554 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.836565 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.836637 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.937946 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.938006 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.938111 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:41:59 crc kubenswrapper[5031]: I0129 08:41:59.958429 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:42:00 crc kubenswrapper[5031]: I0129 08:42:00.065487 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:42:01 crc kubenswrapper[5031]: I0129 08:42:01.296562 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:42:01 crc kubenswrapper[5031]: I0129 08:42:01.296959 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:42:02 crc kubenswrapper[5031]: E0129 08:42:02.675521 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 08:42:02 crc kubenswrapper[5031]: E0129 08:42:02.676164 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lzg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-627gc_openshift-marketplace(dd2c0807-7bcf-435a-8961-fdef958e6c53): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:42:02 crc kubenswrapper[5031]: E0129 08:42:02.677749 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-627gc" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.686759 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] 
Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.688225 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.698097 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.798013 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.798090 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.798263 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.899212 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.899304 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.899383 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.899466 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.899447 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:04 crc kubenswrapper[5031]: I0129 08:42:04.919568 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:05 crc kubenswrapper[5031]: I0129 08:42:05.012809 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:08 crc kubenswrapper[5031]: I0129 08:42:08.493830 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:42:08 crc kubenswrapper[5031]: I0129 08:42:08.494191 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:42:08 crc kubenswrapper[5031]: I0129 08:42:08.494290 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:42:08 crc kubenswrapper[5031]: I0129 08:42:08.495126 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:42:08 crc kubenswrapper[5031]: I0129 08:42:08.495201 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a" gracePeriod=600 Jan 29 08:42:09 crc kubenswrapper[5031]: I0129 08:42:09.071245 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" event={"ID":"20a410c7-0476-4e62-9ee1-5fb6998f308f","Type":"ContainerStarted","Data":"d446f008245a467fb0f614d180e43679a6f70f2642e4c509ee37968350cbf6e7"} Jan 29 08:42:09 crc kubenswrapper[5031]: E0129 08:42:09.653343 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 08:42:09 crc kubenswrapper[5031]: E0129 08:42:09.653612 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vjsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dflqz_openshift-marketplace(1fe2f9cf-9f00-48da-849a-29aa4b0e66ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:42:09 crc kubenswrapper[5031]: E0129 08:42:09.654821 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dflqz" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" Jan 29 08:42:10 crc kubenswrapper[5031]: I0129 08:42:10.078004 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a" exitCode=0 Jan 29 08:42:10 crc kubenswrapper[5031]: I0129 08:42:10.078043 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a"} Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.105977 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-627gc" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.108144 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dflqz" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.185473 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.185657 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55g95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5f9r7_openshift-marketplace(c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.188898 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5f9r7" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.248276 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.248726 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7222,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8gmmw_openshift-marketplace(4ecedf13-919d-482a-bfa7-71e66368c9ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.249903 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8gmmw" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" Jan 29 08:42:11 crc kubenswrapper[5031]: I0129 08:42:11.298325 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 29 08:42:11 crc kubenswrapper[5031]: I0129 08:42:11.298377 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.302539 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.302664 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqbdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m9hg9_openshift-marketplace(ad4a529c-a8ab-47c5-84cd-44002bebb7ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.303775 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.303783 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m9hg9" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.303851 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k8fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bhkg4_openshift-marketplace(b6115352-f309-492c-a7d9-c36ddb9e2454): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.305031 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bhkg4" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454"
Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.351848 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.352030 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn4gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-59md2_openshift-marketplace(0c7d881b-8764-42f1-a4db-87cde90a3a70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 08:42:11 crc kubenswrapper[5031]: E0129 08:42:11.353208 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-59md2" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70"
Jan 29 08:42:11 crc kubenswrapper[5031]: I0129 08:42:11.561049 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 29 08:42:11 crc kubenswrapper[5031]: I0129 08:42:11.657481 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 29 08:42:11 crc kubenswrapper[5031]: W0129 08:42:11.667672 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod01a29a62_f408_4268_8e7b_ac409fb04a2b.slice/crio-f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd WatchSource:0}: Error finding container f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd: Status 404 returned error can't find the container with id f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.089156 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01a29a62-f408-4268-8e7b-ac409fb04a2b","Type":"ContainerStarted","Data":"d90d0333d8f24316930e7e9ed59915b6b0273c6aaa80c810f12d53292a15e2e8"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.089755 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01a29a62-f408-4268-8e7b-ac409fb04a2b","Type":"ContainerStarted","Data":"f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.090259 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98e3a425-69cd-4b4f-9792-6fb22bdb438e","Type":"ContainerStarted","Data":"727dcea8301b210e98d03f952f2a41a7e99e9a0c96640a2590632bd950be2fc3"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.090293 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98e3a425-69cd-4b4f-9792-6fb22bdb438e","Type":"ContainerStarted","Data":"bb45b851afe94459ffa7785c44982705752210e1d87d3f6c649375a55a3f64ee"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.092117 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sp9n7" event={"ID":"5f4e6cea-65e3-446f-9925-d63d00fc235f","Type":"ContainerStarted","Data":"c166b65051512c424b92b617ea99080a7268c7b7d3aae2da4f1d5c78a4154dc5"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.092330 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sp9n7"
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.092846 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.092896 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.094238 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerStarted","Data":"183620a31c624c495b5976917a7cc637366e091be5d8f050e1d44759045bf532"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.096045 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.097543 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" event={"ID":"20a410c7-0476-4e62-9ee1-5fb6998f308f","Type":"ContainerStarted","Data":"c490604e01cc0778807ec78cc0953200cc616307e5ed7d674603cad96748d71e"}
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.097570 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wnmhx" event={"ID":"20a410c7-0476-4e62-9ee1-5fb6998f308f","Type":"ContainerStarted","Data":"5ff13f786449733070182063f55c1412801cc5627f6df436af8494c1aa5ca5a2"}
Jan 29 08:42:12 crc kubenswrapper[5031]: E0129 08:42:12.098626 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8gmmw" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef"
Jan 29 08:42:12 crc kubenswrapper[5031]: E0129 08:42:12.098911 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m9hg9" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce"
Jan 29 08:42:12 crc kubenswrapper[5031]: E0129 08:42:12.099016 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bhkg4" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454"
Jan 29 08:42:12 crc kubenswrapper[5031]: E0129 08:42:12.099092 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-59md2" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70"
Jan 29 08:42:12 crc kubenswrapper[5031]: E0129 08:42:12.100463 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5f9r7" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b"
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.118182 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.118166946 podStartE2EDuration="8.118166946s" podCreationTimestamp="2026-01-29 08:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:12.114753033 +0000 UTC m=+212.614340985" watchObservedRunningTime="2026-01-29 08:42:12.118166946 +0000 UTC m=+212.617754898"
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.129252 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wnmhx" podStartSLOduration=186.129236358 podStartE2EDuration="3m6.129236358s" podCreationTimestamp="2026-01-29 08:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:12.127040959 +0000 UTC m=+212.626628911" watchObservedRunningTime="2026-01-29 08:42:12.129236358 +0000 UTC m=+212.628824310"
Jan 29 08:42:12 crc kubenswrapper[5031]: I0129 08:42:12.273528 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=13.273510481 podStartE2EDuration="13.273510481s" podCreationTimestamp="2026-01-29 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:12.269928414 +0000 UTC m=+212.769516366" watchObservedRunningTime="2026-01-29 08:42:12.273510481 +0000 UTC m=+212.773098433"
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.104706 5031 generic.go:334] "Generic (PLEG): container finished" podID="98e3a425-69cd-4b4f-9792-6fb22bdb438e" containerID="727dcea8301b210e98d03f952f2a41a7e99e9a0c96640a2590632bd950be2fc3" exitCode=0
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.104813 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98e3a425-69cd-4b4f-9792-6fb22bdb438e","Type":"ContainerDied","Data":"727dcea8301b210e98d03f952f2a41a7e99e9a0c96640a2590632bd950be2fc3"}
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.107586 5031 generic.go:334] "Generic (PLEG): container finished" podID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerID="183620a31c624c495b5976917a7cc637366e091be5d8f050e1d44759045bf532" exitCode=0
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.107680 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerDied","Data":"183620a31c624c495b5976917a7cc637366e091be5d8f050e1d44759045bf532"}
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.109211 5031 patch_prober.go:28] interesting pod/downloads-7954f5f757-sp9n7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 29 08:42:13 crc kubenswrapper[5031]: I0129 08:42:13.109246 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sp9n7" podUID="5f4e6cea-65e3-446f-9925-d63d00fc235f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.119278 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerStarted","Data":"a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce"}
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.141559 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brb4j" podStartSLOduration=4.093791274 podStartE2EDuration="56.141539453s" podCreationTimestamp="2026-01-29 08:41:18 +0000 UTC" firstStartedPulling="2026-01-29 08:41:21.562091454 +0000 UTC m=+162.061679406" lastFinishedPulling="2026-01-29 08:42:13.609839633 +0000 UTC m=+214.109427585" observedRunningTime="2026-01-29 08:42:14.139708033 +0000 UTC m=+214.639295995" watchObservedRunningTime="2026-01-29 08:42:14.141539453 +0000 UTC m=+214.641127405"
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.405862 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.522398 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access\") pod \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") "
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.522521 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir\") pod \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\" (UID: \"98e3a425-69cd-4b4f-9792-6fb22bdb438e\") "
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.522760 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98e3a425-69cd-4b4f-9792-6fb22bdb438e" (UID: "98e3a425-69cd-4b4f-9792-6fb22bdb438e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.529152 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98e3a425-69cd-4b4f-9792-6fb22bdb438e" (UID: "98e3a425-69cd-4b4f-9792-6fb22bdb438e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.623572 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:14 crc kubenswrapper[5031]: I0129 08:42:14.623625 5031 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e3a425-69cd-4b4f-9792-6fb22bdb438e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:15 crc kubenswrapper[5031]: I0129 08:42:15.126212 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 29 08:42:15 crc kubenswrapper[5031]: I0129 08:42:15.126206 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98e3a425-69cd-4b4f-9792-6fb22bdb438e","Type":"ContainerDied","Data":"bb45b851afe94459ffa7785c44982705752210e1d87d3f6c649375a55a3f64ee"}
Jan 29 08:42:15 crc kubenswrapper[5031]: I0129 08:42:15.127069 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb45b851afe94459ffa7785c44982705752210e1d87d3f6c649375a55a3f64ee"
Jan 29 08:42:18 crc kubenswrapper[5031]: I0129 08:42:18.564975 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"]
Jan 29 08:42:19 crc kubenswrapper[5031]: I0129 08:42:19.108316 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:19 crc kubenswrapper[5031]: I0129 08:42:19.108382 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:19 crc kubenswrapper[5031]: I0129 08:42:19.630558 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:19 crc kubenswrapper[5031]: I0129 08:42:19.684779 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:20 crc kubenswrapper[5031]: I0129 08:42:20.365314 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brb4j"]
Jan 29 08:42:21 crc kubenswrapper[5031]: I0129 08:42:21.199092 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brb4j" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="registry-server" containerID="cri-o://a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce" gracePeriod=2
Jan 29 08:42:21 crc kubenswrapper[5031]: I0129 08:42:21.321208 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sp9n7"
Jan 29 08:42:21 crc kubenswrapper[5031]: E0129 08:42:21.963393 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd13a7c_9e64_425b_b358_7e6657fa32ab.slice/crio-a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.211771 5031 generic.go:334] "Generic (PLEG): container finished" podID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerID="a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce" exitCode=0
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.211952 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerDied","Data":"a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce"}
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.308032 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.331174 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities\") pod \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") "
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.331301 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content\") pod \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") "
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.331327 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzqp5\" (UniqueName: \"kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5\") pod \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\" (UID: \"5dd13a7c-9e64-425b-b358-7e6657fa32ab\") "
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.332061 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities" (OuterVolumeSpecName: "utilities") pod "5dd13a7c-9e64-425b-b358-7e6657fa32ab" (UID: "5dd13a7c-9e64-425b-b358-7e6657fa32ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.342779 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5" (OuterVolumeSpecName: "kube-api-access-mzqp5") pod "5dd13a7c-9e64-425b-b358-7e6657fa32ab" (UID: "5dd13a7c-9e64-425b-b358-7e6657fa32ab"). InnerVolumeSpecName "kube-api-access-mzqp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.391883 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dd13a7c-9e64-425b-b358-7e6657fa32ab" (UID: "5dd13a7c-9e64-425b-b358-7e6657fa32ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.431903 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.431937 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzqp5\" (UniqueName: \"kubernetes.io/projected/5dd13a7c-9e64-425b-b358-7e6657fa32ab-kube-api-access-mzqp5\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:23 crc kubenswrapper[5031]: I0129 08:42:23.431950 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd13a7c-9e64-425b-b358-7e6657fa32ab-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.220482 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerID="769fad034c616df288b92cbe18e36914a2ee51fc869337ab7bd252a7512be42d" exitCode=0
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.220521 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerDied","Data":"769fad034c616df288b92cbe18e36914a2ee51fc869337ab7bd252a7512be42d"}
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.224284 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brb4j" event={"ID":"5dd13a7c-9e64-425b-b358-7e6657fa32ab","Type":"ContainerDied","Data":"248d153bc7f4d4335234ecfe50075f01480a57cb34cb6fedb46484f33b94f5da"}
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.224331 5031 scope.go:117] "RemoveContainer" containerID="a44ed365f069ef6d15ddf58f1e0a2b80e8c0f91bc5e8d6c581a1bfaf259730ce"
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.224390 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brb4j"
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.283125 5031 scope.go:117] "RemoveContainer" containerID="183620a31c624c495b5976917a7cc637366e091be5d8f050e1d44759045bf532"
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.321325 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brb4j"]
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.325059 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brb4j"]
Jan 29 08:42:24 crc kubenswrapper[5031]: I0129 08:42:24.331783 5031 scope.go:117] "RemoveContainer" containerID="1eedcdb42873991b1c2e7d87f58d12786027e63e5785ba5777ac777207c2f273"
Jan 29 08:42:25 crc kubenswrapper[5031]: I0129 08:42:25.233202 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerStarted","Data":"b9b10315bd2a7338e7ca30b9cd6742ec86369d1b5fd95ea6b2a0dea0c4f662ff"}
Jan 29 08:42:25 crc kubenswrapper[5031]: I0129 08:42:25.236383 5031 generic.go:334] "Generic (PLEG): container finished" podID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerID="17a309a531deedda1b69c3016d37232c4597ee53fd9f42d349e3040d8ac31447" exitCode=0
Jan 29 08:42:25 crc kubenswrapper[5031]: I0129 08:42:25.236391 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerDied","Data":"17a309a531deedda1b69c3016d37232c4597ee53fd9f42d349e3040d8ac31447"}
Jan 29 08:42:25 crc kubenswrapper[5031]: I0129 08:42:25.243859 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerStarted","Data":"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65"}
Jan 29 08:42:26 crc kubenswrapper[5031]: I0129 08:42:26.249524 5031 generic.go:334] "Generic (PLEG): container finished" podID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerID="b9b10315bd2a7338e7ca30b9cd6742ec86369d1b5fd95ea6b2a0dea0c4f662ff" exitCode=0
Jan 29 08:42:26 crc kubenswrapper[5031]: I0129 08:42:26.249582 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerDied","Data":"b9b10315bd2a7338e7ca30b9cd6742ec86369d1b5fd95ea6b2a0dea0c4f662ff"}
Jan 29 08:42:26 crc kubenswrapper[5031]: I0129 08:42:26.252483 5031 generic.go:334] "Generic (PLEG): container finished" podID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerID="194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65" exitCode=0
Jan 29 08:42:26 crc kubenswrapper[5031]: I0129 08:42:26.252526 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerDied","Data":"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65"}
Jan 29 08:42:26 crc kubenswrapper[5031]: I0129 08:42:26.306646 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" path="/var/lib/kubelet/pods/5dd13a7c-9e64-425b-b358-7e6657fa32ab/volumes"
Jan 29 08:42:38 crc kubenswrapper[5031]: I0129 08:42:38.334387 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerStarted","Data":"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"}
Jan 29 08:42:38 crc kubenswrapper[5031]: I0129 08:42:38.341275 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerStarted","Data":"1ac1f65e25b1cfd44fb2f006c6e2841be65a87de39c1c6fdcb84a0cdc795f2b1"}
Jan 29 08:42:38 crc kubenswrapper[5031]: I0129 08:42:38.350753 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerStarted","Data":"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630"}
Jan 29 08:42:38 crc kubenswrapper[5031]: I0129 08:42:38.361675 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerStarted","Data":"31c7b22294bc0e63cbd99f735a6fd8ff6b8e792b1d9219e202aec6489a751de4"}
Jan 29 08:42:38 crc kubenswrapper[5031]: I0129 08:42:38.370609 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerStarted","Data":"465b0621d9456cf54c5d343743066e0a78ef8efc898c7284558d4b1a216daa9e"}
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.394272 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerStarted","Data":"beedc2dab719895280630153be970a6a1bb772d6bc677c6035175a7374387226"}
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.397922 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerStarted","Data":"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9"}
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.401667 5031 generic.go:334] "Generic (PLEG): container finished" podID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerID="8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58" exitCode=0
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.401809 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerDied","Data":"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"}
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.421346 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9hg9" podStartSLOduration=8.974322585 podStartE2EDuration="1m19.421329903s" podCreationTimestamp="2026-01-29 08:41:20 +0000 UTC" firstStartedPulling="2026-01-29 08:41:21.54769296 +0000 UTC m=+162.047280922" lastFinishedPulling="2026-01-29 08:42:31.994700288 +0000 UTC m=+232.494288240" observedRunningTime="2026-01-29 08:42:38.385402302 +0000 UTC m=+238.884990274" watchObservedRunningTime="2026-01-29 08:42:39.421329903 +0000 UTC m=+239.920917855"
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.465401 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dflqz" podStartSLOduration=5.345307448 podStartE2EDuration="1m21.465376297s" podCreationTimestamp="2026-01-29 08:41:18 +0000 UTC" firstStartedPulling="2026-01-29 08:41:21.593934244 +0000 UTC m=+162.093522186" lastFinishedPulling="2026-01-29 08:42:37.714003083 +0000 UTC m=+238.213591035" observedRunningTime="2026-01-29 08:42:39.463131545 +0000 UTC m=+239.962719507" watchObservedRunningTime="2026-01-29 08:42:39.465376297 +0000 UTC m=+239.964964259"
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.526645 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhkg4" podStartSLOduration=5.279130157 podStartE2EDuration="1m21.526627001s" podCreationTimestamp="2026-01-29 08:41:18 +0000 UTC" firstStartedPulling="2026-01-29 08:41:21.482197518 +0000 UTC m=+161.981785470" lastFinishedPulling="2026-01-29 08:42:37.729694362 +0000 UTC m=+238.229282314" observedRunningTime="2026-01-29 08:42:39.510196552 +0000 UTC m=+240.009784504" watchObservedRunningTime="2026-01-29 08:42:39.526627001 +0000 UTC m=+240.026214953"
Jan 29 08:42:39 crc kubenswrapper[5031]: I0129 08:42:39.527205 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gmmw" podStartSLOduration=4.575912202 podStartE2EDuration="1m19.527200716s" podCreationTimestamp="2026-01-29 08:41:20 +0000 UTC" firstStartedPulling="2026-01-29 08:41:22.587593153 +0000 UTC m=+163.087181105" lastFinishedPulling="2026-01-29 08:42:37.538881637 +0000 UTC m=+238.038469619" observedRunningTime="2026-01-29 08:42:39.523768953 +0000 UTC m=+240.023356915" watchObservedRunningTime="2026-01-29 08:42:39.527200716 +0000 UTC m=+240.026788668"
Jan 29 08:42:40 crc kubenswrapper[5031]: I0129 08:42:40.707783 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:42:40 crc kubenswrapper[5031]: I0129 08:42:40.708141 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:42:40 crc kubenswrapper[5031]: I0129 08:42:40.756001 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9hg9"
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.145409 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.145456 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.188208 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gmmw"
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.425985 5031 generic.go:334] "Generic (PLEG): container finished" podID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerID="8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630" exitCode=0
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.426046 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerDied","Data":"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630"}
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.430298 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerStarted","Data":"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec"}
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.431913 5031 generic.go:334] "Generic (PLEG): container finished" podID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerID="beedc2dab719895280630153be970a6a1bb772d6bc677c6035175a7374387226" exitCode=0
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.431993 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerDied","Data":"beedc2dab719895280630153be970a6a1bb772d6bc677c6035175a7374387226"}
Jan 29 08:42:41 crc kubenswrapper[5031]: I0129 08:42:41.472922 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5f9r7" podStartSLOduration=5.100413226 podStartE2EDuration="1m23.472906281s" podCreationTimestamp="2026-01-29 08:41:18 +0000 UTC" firstStartedPulling="2026-01-29 08:41:21.473269248 +0000 UTC m=+161.972857200" lastFinishedPulling="2026-01-29 08:42:39.845762303 +0000 UTC m=+240.345350255" observedRunningTime="2026-01-29 08:42:41.469154319 +0000 UTC m=+241.968742281" watchObservedRunningTime="2026-01-29 08:42:41.472906281 +0000 UTC m=+241.972494233"
Jan 29 08:42:43 crc kubenswrapper[5031]: I0129 08:42:43.446770 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerStarted","Data":"5abc398bc8b1311e459ee44497f35a956c858c07b13e3bfe0aadba53c8fb58cd"}
Jan 29 08:42:43 crc kubenswrapper[5031]: I0129 08:42:43.450270 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerStarted","Data":"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d"}
Jan 29 08:42:43 crc kubenswrapper[5031]: I0129 08:42:43.467248 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-627gc" podStartSLOduration=4.239912748 podStartE2EDuration="1m22.467230355s" podCreationTimestamp="2026-01-29 08:41:21 +0000 UTC" firstStartedPulling="2026-01-29 08:41:24.703321053 +0000 UTC m=+165.202909005" lastFinishedPulling="2026-01-29 08:42:42.93063866 +0000 UTC m=+243.430226612" observedRunningTime="2026-01-29 08:42:43.465902328 +0000 UTC m=+243.965490280" watchObservedRunningTime="2026-01-29 08:42:43.467230355 +0000 UTC m=+243.966818297"
Jan 29 08:42:43 crc kubenswrapper[5031]: I0129 08:42:43.487953 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-59md2" podStartSLOduration=3.285352866 podStartE2EDuration="1m21.487934331s" podCreationTimestamp="2026-01-29 08:41:22 +0000 UTC" firstStartedPulling="2026-01-29 08:41:24.711201134 +0000 UTC m=+165.210789086" lastFinishedPulling="2026-01-29 08:42:42.913782599 +0000 UTC m=+243.413370551" observedRunningTime="2026-01-29 08:42:43.482251205 +0000 UTC m=+243.981839157" watchObservedRunningTime="2026-01-29 08:42:43.487934331 +0000 UTC m=+243.987522273"
Jan 29 08:42:43 crc kubenswrapper[5031]: I0129 08:42:43.591142 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerName="oauth-openshift" containerID="cri-o://2f7c016f3f9f8148db2fd797e19cae1f39380507e34b1b0f60bf875ed078c620" gracePeriod=15
Jan 29 08:42:44 crc kubenswrapper[5031]: I0129 08:42:44.462626 5031 generic.go:334] "Generic (PLEG): container finished" podID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerID="2f7c016f3f9f8148db2fd797e19cae1f39380507e34b1b0f60bf875ed078c620" exitCode=0
Jan 29 08:42:44 crc kubenswrapper[5031]: I0129 08:42:44.463004 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" event={"ID":"9e7bbdcb-3270-42af-bda0-e6bebab732a2","Type":"ContainerDied","Data":"2f7c016f3f9f8148db2fd797e19cae1f39380507e34b1b0f60bf875ed078c620"}
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.262523 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300172 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68755f559b-bdbfh"]
Jan 29 08:42:45 crc kubenswrapper[5031]: E0129 08:42:45.300407 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="extract-utilities"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300420 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="extract-utilities"
Jan 29 08:42:45 crc kubenswrapper[5031]: E0129 08:42:45.300429 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e3a425-69cd-4b4f-9792-6fb22bdb438e" containerName="pruner"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300435 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e3a425-69cd-4b4f-9792-6fb22bdb438e" containerName="pruner"
Jan 29 08:42:45 crc kubenswrapper[5031]: E0129 08:42:45.300445 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerName="oauth-openshift"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300451 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerName="oauth-openshift"
Jan 29 08:42:45 crc kubenswrapper[5031]: E0129 08:42:45.300464 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="extract-content"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300470 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="extract-content"
Jan 29 08:42:45 crc kubenswrapper[5031]: E0129 08:42:45.300480 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="registry-server"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300486 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="registry-server"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300569 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd13a7c-9e64-425b-b358-7e6657fa32ab" containerName="registry-server"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300579 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e3a425-69cd-4b4f-9792-6fb22bdb438e" containerName="pruner"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300592 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" containerName="oauth-openshift"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.300968 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.316660 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68755f559b-bdbfh"]
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.456851 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457182 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457226 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457281 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457323 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457345 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457413 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457431 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457426 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457561 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457596 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457623 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457670 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457703 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457734 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mjtm\" (UniqueName: \"kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm\") pod \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\" (UID: \"9e7bbdcb-3270-42af-bda0-e6bebab732a2\") "
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.457928 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhk4\" (UniqueName: \"kubernetes.io/projected/6b3ee7c1-de61-421e-92f8-f449a3abf675-kube-api-access-hlhk4\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458445 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458469 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458495 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458507 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458695 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-error\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.458995 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-login\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459059 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-dir\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459082 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459150 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-policies\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459511 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-service-ca\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459543 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-session\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459748 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459783 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459860 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.459927 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460009 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460529 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-router-certs\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460617 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460629 5031 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460638 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460680 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.460694 5031 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e7bbdcb-3270-42af-bda0-e6bebab732a2-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.462710 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.463097 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.463700 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm" (OuterVolumeSpecName: "kube-api-access-9mjtm") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "kube-api-access-9mjtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.464326 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.464830 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.465042 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.465743 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.471404 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.472596 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6" event={"ID":"9e7bbdcb-3270-42af-bda0-e6bebab732a2","Type":"ContainerDied","Data":"c4c0f738913e593f7cdd2224755ba0689bc706efb7b6caa0ca0560e948f79c1c"}
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.472664 5031 scope.go:117] "RemoveContainer" containerID="2f7c016f3f9f8148db2fd797e19cae1f39380507e34b1b0f60bf875ed078c620"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.472883 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rjzm6"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.487042 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9e7bbdcb-3270-42af-bda0-e6bebab732a2" (UID: "9e7bbdcb-3270-42af-bda0-e6bebab732a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.561741 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-router-certs\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.561995 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhk4\" (UniqueName: \"kubernetes.io/projected/6b3ee7c1-de61-421e-92f8-f449a3abf675-kube-api-access-hlhk4\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562086 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-error\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562184 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-login\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562277 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-dir\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562387 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562538 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-policies\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562675 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-service-ca\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562408 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-dir\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562757 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-session\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562876 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562953 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.562993 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563029 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh"
Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563076 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") "
pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563193 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563215 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563233 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563246 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563260 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mjtm\" (UniqueName: \"kubernetes.io/projected/9e7bbdcb-3270-42af-bda0-e6bebab732a2-kube-api-access-9mjtm\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563272 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563285 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563298 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.563310 5031 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e7bbdcb-3270-42af-bda0-e6bebab732a2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.567338 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-login\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.567751 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-audit-policies\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " 
pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.567836 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-service-ca\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.567966 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-router-certs\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.568082 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.568220 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-error\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.570808 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.571049 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.571138 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-session\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.571797 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " 
pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.573009 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.576041 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b3ee7c1-de61-421e-92f8-f449a3abf675-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.579258 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhk4\" (UniqueName: \"kubernetes.io/projected/6b3ee7c1-de61-421e-92f8-f449a3abf675-kube-api-access-hlhk4\") pod \"oauth-openshift-68755f559b-bdbfh\" (UID: \"6b3ee7c1-de61-421e-92f8-f449a3abf675\") " pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.618954 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.801689 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"] Jan 29 08:42:45 crc kubenswrapper[5031]: I0129 08:42:45.807105 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rjzm6"] Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.037584 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68755f559b-bdbfh"] Jan 29 08:42:46 crc kubenswrapper[5031]: W0129 08:42:46.045576 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b3ee7c1_de61_421e_92f8_f449a3abf675.slice/crio-b91daf6b52c6a6d06e883d8c72dc26b55a3a4fdbac0c8f42efcf126fbc9e9587 WatchSource:0}: Error finding container b91daf6b52c6a6d06e883d8c72dc26b55a3a4fdbac0c8f42efcf126fbc9e9587: Status 404 returned error can't find the container with id b91daf6b52c6a6d06e883d8c72dc26b55a3a4fdbac0c8f42efcf126fbc9e9587 Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.289749 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7bbdcb-3270-42af-bda0-e6bebab732a2" path="/var/lib/kubelet/pods/9e7bbdcb-3270-42af-bda0-e6bebab732a2/volumes" Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.479959 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" event={"ID":"6b3ee7c1-de61-421e-92f8-f449a3abf675","Type":"ContainerStarted","Data":"3e279a2f24c28f626c7828e56dbe09fbca103940c7a3523e4c01a6868c15ff4b"} Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.480027 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" 
event={"ID":"6b3ee7c1-de61-421e-92f8-f449a3abf675","Type":"ContainerStarted","Data":"b91daf6b52c6a6d06e883d8c72dc26b55a3a4fdbac0c8f42efcf126fbc9e9587"} Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.480210 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.481865 5031 patch_prober.go:28] interesting pod/oauth-openshift-68755f559b-bdbfh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.481937 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" podUID="6b3ee7c1-de61-421e-92f8-f449a3abf675" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 29 08:42:46 crc kubenswrapper[5031]: I0129 08:42:46.510418 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" podStartSLOduration=28.510389572 podStartE2EDuration="28.510389572s" podCreationTimestamp="2026-01-29 08:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:46.505803386 +0000 UTC m=+247.005391338" watchObservedRunningTime="2026-01-29 08:42:46.510389572 +0000 UTC m=+247.009977524" Jan 29 08:42:47 crc kubenswrapper[5031]: I0129 08:42:47.494218 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68755f559b-bdbfh" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.796936 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.796986 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.850179 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.909246 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.910054 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:42:48 crc kubenswrapper[5031]: I0129 08:42:48.949612 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.316596 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.316909 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.355469 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.540677 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.540750 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.560926 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.657873 5031 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.658192 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d" gracePeriod=15 Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.658229 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43" gracePeriod=15 Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.658238 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6" gracePeriod=15 Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.658298 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0" gracePeriod=15 Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.658312 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309" gracePeriod=15 Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.659800 5031 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660019 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660034 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660044 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660049 5031 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660056 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660061 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660070 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660076 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660086 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660091 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660100 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660106 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660117 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660122 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660210 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660222 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660231 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660237 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660244 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660253 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660259 5031 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:42:49 crc kubenswrapper[5031]: E0129 08:42:49.660350 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.660358 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.661714 5031 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.662558 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.665983 5031 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.818770 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.819901 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820022 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820104 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820195 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820269 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") 
" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820411 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.820522 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922622 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922682 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922709 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922733 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922751 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922780 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922833 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922847 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922885 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922885 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922904 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922950 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922925 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922951 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922926 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:49 crc kubenswrapper[5031]: I0129 08:42:49.922998 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:50 
crc kubenswrapper[5031]: I0129 08:42:50.504852 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.506230 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.507023 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6" exitCode=0 Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.507053 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0" exitCode=0 Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.507063 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43" exitCode=0 Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.507073 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309" exitCode=2 Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.507203 5031 scope.go:117] "RemoveContainer" containerID="fc1d9355eb1469af2db02950b534cd48ee0ce848563814bcb75d77dd682fc0ae" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.508932 5031 generic.go:334] "Generic (PLEG): container finished" podID="01a29a62-f408-4268-8e7b-ac409fb04a2b" containerID="d90d0333d8f24316930e7e9ed59915b6b0273c6aaa80c810f12d53292a15e2e8" exitCode=0 Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.509521 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01a29a62-f408-4268-8e7b-ac409fb04a2b","Type":"ContainerDied","Data":"d90d0333d8f24316930e7e9ed59915b6b0273c6aaa80c810f12d53292a15e2e8"} Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.510562 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.753271 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9hg9" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.754084 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:50 crc kubenswrapper[5031]: I0129 08:42:50.754559 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: 
connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.015222 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:52fb7f7f9a7d2d3d21175c6864cd5075456c631193a6623cfeb4de0361520595\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7f0a24808aa8879239e05099ecd5471a547ae6ce2e1b6747311ec6504f0d44c7\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185659676},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":5
04778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMount
s\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.015891 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.016485 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.016925 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.017226 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.017259 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.185965 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gmmw" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.186795 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.187252 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.187636 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.517194 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.638360 5031 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.638988 5031 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.639161 5031 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.639325 5031 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.639510 5031 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.639531 5031 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.639702 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="200ms" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.787949 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.788868 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.789143 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.789743 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:51 crc kubenswrapper[5031]: E0129 08:42:51.842032 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="400ms" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.958951 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access\") pod \"01a29a62-f408-4268-8e7b-ac409fb04a2b\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.959008 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir\") pod \"01a29a62-f408-4268-8e7b-ac409fb04a2b\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.959129 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock\") pod \"01a29a62-f408-4268-8e7b-ac409fb04a2b\" (UID: \"01a29a62-f408-4268-8e7b-ac409fb04a2b\") " Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.959160 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01a29a62-f408-4268-8e7b-ac409fb04a2b" (UID: "01a29a62-f408-4268-8e7b-ac409fb04a2b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.959298 5031 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.959300 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock" (OuterVolumeSpecName: "var-lock") pod "01a29a62-f408-4268-8e7b-ac409fb04a2b" (UID: "01a29a62-f408-4268-8e7b-ac409fb04a2b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:42:51 crc kubenswrapper[5031]: I0129 08:42:51.965553 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01a29a62-f408-4268-8e7b-ac409fb04a2b" (UID: "01a29a62-f408-4268-8e7b-ac409fb04a2b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.060702 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a29a62-f408-4268-8e7b-ac409fb04a2b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.060745 5031 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01a29a62-f408-4268-8e7b-ac409fb04a2b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.097161 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.097241 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.145102 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.145622 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.146003 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.146439 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.146692 5031 status_manager.go:851] "Failed to get 
status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: E0129 08:42:52.243047 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="800ms" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.489481 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.489543 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.527309 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.527608 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.528162 5031 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d" exitCode=0 Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.529771 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01a29a62-f408-4268-8e7b-ac409fb04a2b","Type":"ContainerDied","Data":"f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd"} Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.529812 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d202a1b74cb2dfa4346ec507b6395ac4b85311b1fbd2891a5e0a333d3206fd" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.529812 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.530573 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.530901 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.531222 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.531599 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.531813 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.564918 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.565318 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.565750 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.566331 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.566604 5031 status_manager.go:851] "Failed to get status for pod" 
podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.566919 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.570525 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.570789 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.571207 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.571648 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.572391 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.577761 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.578251 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.579105 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: 
connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.579352 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.579594 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:52 crc kubenswrapper[5031]: I0129 08:42:52.579765 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:53 crc kubenswrapper[5031]: E0129 08:42:53.043819 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="1.6s" Jan 29 08:42:53 crc kubenswrapper[5031]: I0129 08:42:53.999485 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.000929 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.001991 5031 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.002754 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.003574 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.003888 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.004195 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.004572 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.086590 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087061 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.086745 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087130 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087386 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087467 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087730 5031 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087820 5031 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.087884 5031 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.291711 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.544766 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.545996 5031 scope.go:117] "RemoveContainer" containerID="bd8ea91ff5b72ef4af396d3d6411a0a40695c57e65faa15bb9a7f39d6b2226e6" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.546082 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.546868 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.547807 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.548280 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.548871 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.549199 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.549662 5031 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.552826 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.553226 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.553533 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 
38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.553806 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.554171 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.554592 5031 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.564482 5031 scope.go:117] "RemoveContainer" containerID="5143a9c7fd123dac34f4795adaf154ce97ad8104ea5eb5c5970f06b95475c4c0" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.578589 5031 scope.go:117] "RemoveContainer" containerID="0f488f9b956eab8e807a633c3dc2160d8f26767c7abc2aaea894e74849c9da43" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.593620 5031 scope.go:117] "RemoveContainer" containerID="e8b3ca6cc68e15576aedab3e7b39b5f2334762676dd2dd7ed0724cf83740a309" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.608333 5031 scope.go:117] "RemoveContainer" containerID="8d816e6345c868a302ebf9091ca2d0c7f9c59cffa637dc7e1038a28a5ca70d0d" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.628502 5031 scope.go:117] "RemoveContainer" containerID="d9fe145046ade01dda252540a2229973beef4a1b53d7846dfb66cbe3b7360dc0" Jan 29 08:42:54 crc kubenswrapper[5031]: E0129 08:42:54.645645 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="3.2s" Jan 29 08:42:54 crc kubenswrapper[5031]: E0129 08:42:54.702104 5031 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:54 crc kubenswrapper[5031]: I0129 08:42:54.702905 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:54 crc kubenswrapper[5031]: E0129 08:42:54.733412 5031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f271ad265fae2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:42:54.732425954 +0000 UTC m=+255.232013896,LastTimestamp:2026-01-29 08:42:54.732425954 +0000 UTC m=+255.232013896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 08:42:55 crc kubenswrapper[5031]: I0129 08:42:55.558177 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"de849a2bb322015303373fe36ccd756ddc2db18205805591f3095a15b043ca6a"} Jan 29 08:42:55 crc kubenswrapper[5031]: I0129 08:42:55.558644 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"42d25f1523b70bffba22ae07307affbd20b326afcb64307a92674f8fae74a38f"} Jan 29 08:42:56 crc kubenswrapper[5031]: E0129 08:42:56.564220 5031 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:42:56 crc kubenswrapper[5031]: I0129 08:42:56.564298 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:56 crc kubenswrapper[5031]: I0129 08:42:56.565967 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:56 crc kubenswrapper[5031]: I0129 08:42:56.566210 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:56 crc kubenswrapper[5031]: I0129 08:42:56.566476 5031 
status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:56 crc kubenswrapper[5031]: I0129 08:42:56.566685 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:42:57 crc kubenswrapper[5031]: E0129 08:42:57.846941 5031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" interval="6.4s" Jan 29 08:43:00 crc kubenswrapper[5031]: I0129 08:43:00.286145 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:00 crc kubenswrapper[5031]: I0129 08:43:00.286914 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:00 crc kubenswrapper[5031]: I0129 08:43:00.287490 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:00 crc kubenswrapper[5031]: I0129 08:43:00.287890 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:00 crc kubenswrapper[5031]: I0129 08:43:00.288146 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.039152 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:43:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:43:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:43:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:43:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:52fb7f7f9a7d2d3d21175c6864cd5075456c631193a6623cfeb4de0361520595\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7f0a24808aa8879239e05099ecd5471a547ae6ce2e1b6747311ec6504f0d44c7\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185659676},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.039651 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.039843 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.039984 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.040188 5031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.040210 5031 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.281655 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.282339 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.282716 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.283116 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.283389 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.283657 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.296621 5031 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.296660 5031 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.296935 5031 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.297390 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:01 crc kubenswrapper[5031]: W0129 08:43:01.318600 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-159f4c33767bb63899d1f381a6c7974778d8f3c273fcd986f2e8b29d666c6252 WatchSource:0}: Error finding container 159f4c33767bb63899d1f381a6c7974778d8f3c273fcd986f2e8b29d666c6252: Status 404 returned error can't find the container with id 159f4c33767bb63899d1f381a6c7974778d8f3c273fcd986f2e8b29d666c6252 Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.596323 5031 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1cd88c62c60092124ed679f3c8ac496684777534a458bf2777b60eafc1c4083b" exitCode=0 Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.596429 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1cd88c62c60092124ed679f3c8ac496684777534a458bf2777b60eafc1c4083b"} Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.597062 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"159f4c33767bb63899d1f381a6c7974778d8f3c273fcd986f2e8b29d666c6252"} Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.597519 5031 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.597555 5031 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:01 crc kubenswrapper[5031]: E0129 08:43:01.598107 5031 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.598527 5031 status_manager.go:851] "Failed to get status for pod" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" pod="openshift-marketplace/redhat-marketplace-m9hg9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-m9hg9\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.599194 5031 status_manager.go:851] "Failed to get status for pod" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" pod="openshift-marketplace/redhat-operators-627gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-627gc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.599608 5031 status_manager.go:851] "Failed to get status for pod" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" pod="openshift-marketplace/redhat-operators-59md2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-59md2\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.599914 5031 status_manager.go:851] "Failed to get status for pod" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:01 crc kubenswrapper[5031]: I0129 08:43:01.600297 5031 status_manager.go:851] "Failed to get status for pod" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" pod="openshift-marketplace/redhat-marketplace-8gmmw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8gmmw\": dial tcp 38.129.56.153:6443: connect: connection refused" Jan 29 08:43:02 crc kubenswrapper[5031]: I0129 08:43:02.606205 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"750bc0ac36c2f2e489fee31685424da4cc0e96bcd5a954bfa60039c0d08155b4"} Jan 29 08:43:02 crc kubenswrapper[5031]: I0129 08:43:02.606508 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"161b12c689f6d04cfdfca7650ae982f24a7d0fcaee7ac1f46bf0e08311779e7d"} Jan 29 08:43:02 crc kubenswrapper[5031]: I0129 08:43:02.606524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4805a6d74ce6426a730939ada1df06cc988724fb010d0579d7d5831e516b8766"} Jan 29 08:43:02 crc kubenswrapper[5031]: I0129 08:43:02.606535 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c91c4eec8986d35ef75371bb644df7daf317527662891f85a357ff01bb22a199"} Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.651449 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.651811 5031 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12" exitCode=1 Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 
08:43:03.651888 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12"} Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.652609 5031 scope.go:117] "RemoveContainer" containerID="1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12" Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.655395 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ac593ebad40fc17585a9f7acbcd0c91bc28ea5c46dc73b96afa736a3777f8f5"} Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.655536 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.655601 5031 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:03 crc kubenswrapper[5031]: I0129 08:43:03.655621 5031 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:04 crc kubenswrapper[5031]: I0129 08:43:04.665107 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:43:04 crc kubenswrapper[5031]: I0129 08:43:04.665186 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aba7891cb5eaa4fc09e428e3bcf73fb2e3892bf9c5c364a266a52b1b436c043a"} Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.298095 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.298176 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.308759 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.681019 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.681296 5031 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:43:06 crc kubenswrapper[5031]: I0129 08:43:06.681433 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:43:07 crc kubenswrapper[5031]: I0129 
08:43:07.720107 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:43:08 crc kubenswrapper[5031]: I0129 08:43:08.663798 5031 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:08 crc kubenswrapper[5031]: I0129 08:43:08.689117 5031 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:08 crc kubenswrapper[5031]: I0129 08:43:08.689660 5031 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:08 crc kubenswrapper[5031]: I0129 08:43:08.692379 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:09 crc kubenswrapper[5031]: I0129 08:43:09.696952 5031 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:09 crc kubenswrapper[5031]: I0129 08:43:09.696983 5031 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02d75cf5-b6e2-4154-ba13-d7ce17d37394" Jan 29 08:43:10 crc kubenswrapper[5031]: I0129 08:43:10.303736 5031 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d347593a-eeb7-4fa3-9a6f-8d2cadb8789b" Jan 29 08:43:14 crc kubenswrapper[5031]: I0129 08:43:14.732846 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 08:43:16 crc kubenswrapper[5031]: I0129 08:43:16.294670 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 08:43:16 crc kubenswrapper[5031]: I0129 08:43:16.681354 5031 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:43:16 crc kubenswrapper[5031]: I0129 08:43:16.681757 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:43:16 crc kubenswrapper[5031]: I0129 08:43:16.729770 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 08:43:16 crc kubenswrapper[5031]: I0129 08:43:16.850002 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 08:43:17 crc kubenswrapper[5031]: I0129 08:43:17.370938 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 08:43:17 crc kubenswrapper[5031]: I0129 08:43:17.431161 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" 
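
The repeated patch_prober/prober entries just above (08:43:06, 08:43:16, and again at 08:43:26 below) are the kubelet's startup probe for kube-controller-manager polling https://192.168.126.11:10257/healthz every 10 seconds and getting connection refused while the container boots. The following is a minimal Go sketch of a probe with that shape, not the actual pod spec: the endpoint values are taken from the log, the 10s period is inferred from the probe cadence, and the failure threshold is a placeholder assumption.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Startup probe shaped like the one failing in the log: an HTTPS GET
	// against the controller-manager's /healthz on 192.168.126.11:10257.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Scheme: corev1.URISchemeHTTPS,
				Host:   "192.168.126.11",
				Path:   "/healthz",
				Port:   intstr.FromInt(10257),
			},
		},
		PeriodSeconds:    10, // inferred from the 08:43:06/:16/:26 cadence
		FailureThreshold: 3,  // placeholder; the real spec's value is not in the log
	}
	fmt.Printf("%+v\n", probe)
}

Once consecutive failures pass the threshold, the kubelet emits exactly the pair of messages seen at 08:43:26 below ("Container kube-controller-manager failed startup probe, will be restarted") and kills the container with the spec's grace period (gracePeriod=30 in this log).
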
Jan 29 08:43:17 crc kubenswrapper[5031]: I0129 08:43:17.795711 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 08:43:19 crc kubenswrapper[5031]: I0129 08:43:19.736390 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 08:43:19 crc kubenswrapper[5031]: I0129 08:43:19.787309 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 08:43:19 crc kubenswrapper[5031]: I0129 08:43:19.833917 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 08:43:19 crc kubenswrapper[5031]: I0129 08:43:19.996880 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 08:43:20 crc kubenswrapper[5031]: I0129 08:43:20.073488 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 08:43:20 crc kubenswrapper[5031]: I0129 08:43:20.124525 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 08:43:20 crc kubenswrapper[5031]: I0129 08:43:20.217727 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 08:43:20 crc kubenswrapper[5031]: I0129 08:43:20.296037 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 08:43:20 crc kubenswrapper[5031]: I0129 08:43:20.539536 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.028749 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.122532 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.247946 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.342050 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.767427 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 08:43:21 crc kubenswrapper[5031]: I0129 08:43:21.928916 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.190214 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.223440 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.225843 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" 
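
The long run of reflector.go:368 "Caches populated" entries here and below is client-go reflectors completing their initial LIST against the recovered apiserver, one cache per ConfigMap/Secret the kubelet tracks for admitted pods. The sketch below reproduces that mechanism with a standalone shared informer; the kubeconfig path and example namespace are assumptions for illustration, not the kubelet's actual wiring.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory scoped to a namespace, mirroring how the log keys each
	// cache by object-"<namespace>"/"<name>".
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-marketplace"))
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off the reflector's list+watch loop

	// Returns once the initial LIST has landed in the store -- the point at
	// which the kubelet logs "Caches populated for *v1.Secret".
	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.Secret")
}

Until a given cache syncs, consumers of that object (secret and configmap volume mounts, env var resolution) keep waiting, which is consistent with these entries clustering right after the apiserver becomes reachable again at 08:43:08.
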
Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.369700 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.388153 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.411769 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.460301 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.484164 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.709093 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.752156 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.774886 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.811244 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.867674 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.896482 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 08:43:22 crc kubenswrapper[5031]: I0129 08:43:22.900241 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.102045 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.122773 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.301471 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.407658 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.610645 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.656781 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.773979 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:43:23 crc 
kubenswrapper[5031]: I0129 08:43:23.782022 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 08:43:23 crc kubenswrapper[5031]: I0129 08:43:23.791315 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.035875 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.057257 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.073103 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.075766 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.077600 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.089108 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.245854 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.264774 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.488890 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.550097 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.562930 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.579928 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.582132 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.582426 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.677702 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.837642 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.936885 5031 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 08:43:24 crc kubenswrapper[5031]: I0129 08:43:24.984046 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.002235 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.016815 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.045553 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.056349 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.096869 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.132311 5031 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.162083 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.282809 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.300151 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.319674 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.328734 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.507300 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.509511 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.581046 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.696818 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.723980 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.765952 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.768403 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 
08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.776955 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.839821 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.929467 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.936651 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.956579 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 08:43:25 crc kubenswrapper[5031]: I0129 08:43:25.959067 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.019177 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.025607 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.079582 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.186339 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.257252 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.270072 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.358562 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.396445 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.539965 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.590027 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.605849 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.624330 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.651140 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 
08:43:26.681709 5031 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.681783 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.681832 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.682475 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"aba7891cb5eaa4fc09e428e3bcf73fb2e3892bf9c5c364a266a52b1b436c043a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.682580 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://aba7891cb5eaa4fc09e428e3bcf73fb2e3892bf9c5c364a266a52b1b436c043a" gracePeriod=30 Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.727391 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.736843 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.763925 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.843117 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.985796 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 08:43:26 crc kubenswrapper[5031]: I0129 08:43:26.993769 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.174798 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.177198 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.194169 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.354920 5031 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.360299 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.363928 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.405620 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.407804 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.458374 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.491096 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.545973 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.551578 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.614251 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.659759 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.749341 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.841499 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 08:43:27 crc kubenswrapper[5031]: I0129 08:43:27.948616 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.022175 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.030094 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.109100 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.141981 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.151541 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.513913 
5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.550003 5031 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.577408 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.602599 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.685214 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.700697 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.704225 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.721913 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.772538 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.793988 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.813097 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.819946 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.828719 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 08:43:28 crc kubenswrapper[5031]: I0129 08:43:28.960029 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.007467 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.011662 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.071264 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.147246 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.223682 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.224637 5031 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.273431 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.322672 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.329808 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.378324 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.378674 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.379716 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.530855 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.656696 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.744143 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.772720 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.791688 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.824145 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.897800 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.962124 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 08:43:29 crc kubenswrapper[5031]: I0129 08:43:29.970781 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.000467 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.034235 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.093156 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 
08:43:30.105676 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.603780 5031 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.619483 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.670352 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.682947 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.720202 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.722003 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.747629 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.798105 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 08:43:30 crc kubenswrapper[5031]: I0129 08:43:30.985529 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.067967 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.147507 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.251953 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.402604 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.409739 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.531125 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.596174 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.619943 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.637169 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 08:43:31 crc 
kubenswrapper[5031]: I0129 08:43:31.658191 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.798058 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.810347 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.841762 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.931351 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 08:43:31 crc kubenswrapper[5031]: I0129 08:43:31.979867 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.056325 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.082626 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.089249 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.129771 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.169761 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.197963 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.221234 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.261231 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.297053 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.305525 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.466826 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.520769 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.627607 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 
08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.708819 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.848717 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 08:43:32 crc kubenswrapper[5031]: I0129 08:43:32.874361 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.014740 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.039395 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.118310 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.141670 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.249122 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.319541 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.330473 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.433615 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.497802 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.553640 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.565341 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.578140 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.630550 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.732110 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 08:43:33 crc kubenswrapper[5031]: I0129 08:43:33.802376 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.086653 5031 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.255398 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.376450 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.467863 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.491320 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.619690 5031 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.700940 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.711797 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 08:43:34 crc kubenswrapper[5031]: I0129 08:43:34.833581 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.307483 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.348192 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.455273 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.660643 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.686500 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.981389 5031 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.986819 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.986884 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:43:35 crc kubenswrapper[5031]: I0129 08:43:35.992596 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.005269 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.005251031 podStartE2EDuration="28.005251031s" podCreationTimestamp="2026-01-29 08:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.004697925 +0000 UTC m=+296.504285877" watchObservedRunningTime="2026-01-29 08:43:36.005251031 +0000 UTC m=+296.504839003" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.118326 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.308735 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.518046 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.573891 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.648643 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.712517 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.793812 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.885323 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.896454 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 08:43:36 crc kubenswrapper[5031]: I0129 08:43:36.930798 5031 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 08:43:37 crc kubenswrapper[5031]: I0129 08:43:37.202159 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 08:43:37 crc kubenswrapper[5031]: I0129 08:43:37.332718 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 08:43:38 crc kubenswrapper[5031]: I0129 08:43:38.049831 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 08:43:38 crc kubenswrapper[5031]: I0129 08:43:38.244927 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 08:43:40 crc kubenswrapper[5031]: I0129 08:43:40.070963 5031 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 08:43:42 crc kubenswrapper[5031]: I0129 08:43:42.319930 5031 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 08:43:42 crc kubenswrapper[5031]: I0129 08:43:42.320902 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://de849a2bb322015303373fe36ccd756ddc2db18205805591f3095a15b043ca6a" gracePeriod=5 Jan 29 08:43:47 crc kubenswrapper[5031]: I0129 08:43:47.911630 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 08:43:47 crc kubenswrapper[5031]: I0129 08:43:47.912580 5031 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="de849a2bb322015303373fe36ccd756ddc2db18205805591f3095a15b043ca6a" exitCode=137 Jan 29 08:43:47 crc kubenswrapper[5031]: I0129 08:43:47.912649 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42d25f1523b70bffba22ae07307affbd20b326afcb64307a92674f8fae74a38f" Jan 29 08:43:47 crc kubenswrapper[5031]: I0129 08:43:47.911630 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 08:43:47 crc kubenswrapper[5031]: I0129 08:43:47.912749 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099569 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099721 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099759 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099783 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099792 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099809 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099843 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.099876 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.100038 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.100229 5031 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.100247 5031 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.100258 5031 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.100268 5031 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.107568 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.201617 5031 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.292185 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 08:43:48 crc kubenswrapper[5031]: I0129 08:43:48.917656 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:43:56 crc kubenswrapper[5031]: I0129 08:43:56.969041 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 08:43:56 crc kubenswrapper[5031]: I0129 08:43:56.972967 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:43:56 crc kubenswrapper[5031]: I0129 08:43:56.973032 5031 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="aba7891cb5eaa4fc09e428e3bcf73fb2e3892bf9c5c364a266a52b1b436c043a" exitCode=137 Jan 29 08:43:56 crc kubenswrapper[5031]: I0129 08:43:56.973089 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"aba7891cb5eaa4fc09e428e3bcf73fb2e3892bf9c5c364a266a52b1b436c043a"} Jan 29 08:43:56 crc kubenswrapper[5031]: I0129 08:43:56.973130 5031 scope.go:117] "RemoveContainer" containerID="1625c8dce1930baeef8fbde3e83a68e9cfe223cb950a40929bb02ead92b3fc12" Jan 29 08:43:57 crc kubenswrapper[5031]: I0129 08:43:57.981274 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 08:43:57 crc kubenswrapper[5031]: I0129 08:43:57.982978 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d847a6c9c2d5895bc8e8a86b6a54c017e398e863c274cd71b0fc48966b9904be"} Jan 29 08:44:06 crc kubenswrapper[5031]: I0129 08:44:06.681240 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:44:06 crc kubenswrapper[5031]: I0129 08:44:06.686313 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:44:07 crc kubenswrapper[5031]: I0129 08:44:07.035315 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:44:07 crc kubenswrapper[5031]: I0129 08:44:07.039199 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:44:16 crc kubenswrapper[5031]: I0129 08:44:16.600199 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"] Jan 29 08:44:16 crc kubenswrapper[5031]: I0129 08:44:16.600953 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhkg4" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="registry-server" containerID="cri-o://b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9" gracePeriod=2 Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.014433 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.044639 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content\") pod \"b6115352-f309-492c-a7d9-c36ddb9e2454\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.044703 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8fv\" (UniqueName: \"kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv\") pod \"b6115352-f309-492c-a7d9-c36ddb9e2454\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.044758 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities\") pod \"b6115352-f309-492c-a7d9-c36ddb9e2454\" (UID: \"b6115352-f309-492c-a7d9-c36ddb9e2454\") " Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.045864 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities" (OuterVolumeSpecName: "utilities") pod "b6115352-f309-492c-a7d9-c36ddb9e2454" (UID: "b6115352-f309-492c-a7d9-c36ddb9e2454"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.052992 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv" (OuterVolumeSpecName: "kube-api-access-6k8fv") pod "b6115352-f309-492c-a7d9-c36ddb9e2454" (UID: "b6115352-f309-492c-a7d9-c36ddb9e2454"). InnerVolumeSpecName "kube-api-access-6k8fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.088965 5031 generic.go:334] "Generic (PLEG): container finished" podID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerID="b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9" exitCode=0 Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.089008 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerDied","Data":"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9"} Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.089019 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkg4" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.089039 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkg4" event={"ID":"b6115352-f309-492c-a7d9-c36ddb9e2454","Type":"ContainerDied","Data":"1ab59214313cd989532506e8231ccd9d0bf62ff51e1017e7817c4ba345e05a84"} Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.089062 5031 scope.go:117] "RemoveContainer" containerID="b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.111633 5031 scope.go:117] "RemoveContainer" containerID="194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.117770 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6115352-f309-492c-a7d9-c36ddb9e2454" (UID: "b6115352-f309-492c-a7d9-c36ddb9e2454"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.145468 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.145506 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8fv\" (UniqueName: \"kubernetes.io/projected/b6115352-f309-492c-a7d9-c36ddb9e2454-kube-api-access-6k8fv\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.145521 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6115352-f309-492c-a7d9-c36ddb9e2454-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.148787 5031 scope.go:117] "RemoveContainer" containerID="6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.160557 5031 scope.go:117] "RemoveContainer" containerID="b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9" Jan 29 08:44:17 crc kubenswrapper[5031]: E0129 08:44:17.161081 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9\": container with ID starting with b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9 not found: ID does not exist" containerID="b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.161121 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9"} err="failed to get container status \"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9\": rpc error: code = NotFound desc = could not find container \"b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9\": container with ID starting with b14a66f8308d1e03833a41796b8c9db7ecfdfd3014ad8116121e3b60be7025b9 not found: ID does not exist" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.161147 5031 scope.go:117] "RemoveContainer" 
containerID="194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65" Jan 29 08:44:17 crc kubenswrapper[5031]: E0129 08:44:17.161389 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65\": container with ID starting with 194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65 not found: ID does not exist" containerID="194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.161419 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65"} err="failed to get container status \"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65\": rpc error: code = NotFound desc = could not find container \"194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65\": container with ID starting with 194c9ec1b5a469e1fad8190235e98a372330f33b353693b05556cb4fa4201d65 not found: ID does not exist" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.161437 5031 scope.go:117] "RemoveContainer" containerID="6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd" Jan 29 08:44:17 crc kubenswrapper[5031]: E0129 08:44:17.161721 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd\": container with ID starting with 6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd not found: ID does not exist" containerID="6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.161750 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd"} err="failed to get container status \"6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd\": rpc error: code = NotFound desc = could not find container \"6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd\": container with ID starting with 6de7a6065c79400c3354bfe73074bd5d4bf9fb0a674e6201a931e3b0364500bd not found: ID does not exist" Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.426683 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"] Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.430670 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhkg4"] Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.792868 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"] Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.793119 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerName="controller-manager" containerID="cri-o://38caea6d6d82b7b75f5312c927e6271bb4869424178cfae113a12bcc6f1ffe0b" gracePeriod=30 Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 08:44:17.802319 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"] Jan 29 08:44:17 crc kubenswrapper[5031]: I0129 
08:44:17.802646 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerName="route-controller-manager" containerID="cri-o://b0b793f3f52611d2d823fa6cf7d723454c3742c3f1b447fcef7332e67c479a0f" gracePeriod=30 Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.096719 5031 generic.go:334] "Generic (PLEG): container finished" podID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerID="38caea6d6d82b7b75f5312c927e6271bb4869424178cfae113a12bcc6f1ffe0b" exitCode=0 Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.096786 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" event={"ID":"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b","Type":"ContainerDied","Data":"38caea6d6d82b7b75f5312c927e6271bb4869424178cfae113a12bcc6f1ffe0b"} Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.098723 5031 generic.go:334] "Generic (PLEG): container finished" podID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerID="b0b793f3f52611d2d823fa6cf7d723454c3742c3f1b447fcef7332e67c479a0f" exitCode=0 Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.098797 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" event={"ID":"55a0f308-38a2-4bcf-b125-d7c0fa28f036","Type":"ContainerDied","Data":"b0b793f3f52611d2d823fa6cf7d723454c3742c3f1b447fcef7332e67c479a0f"} Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.200181 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"] Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.200510 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gmmw" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="registry-server" containerID="cri-o://1ac1f65e25b1cfd44fb2f006c6e2841be65a87de39c1c6fdcb84a0cdc795f2b1" gracePeriod=2 Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.289764 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" path="/var/lib/kubelet/pods/b6115352-f309-492c-a7d9-c36ddb9e2454/volumes" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.854845 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.864656 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca\") pod \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.864744 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles\") pod \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.864773 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl4xx\" (UniqueName: \"kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx\") pod \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.864818 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config\") pod \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.864845 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert\") pod \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\" (UID: \"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.865786 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" (UID: "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.868423 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" (UID: "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.868824 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config" (OuterVolumeSpecName: "config") pod "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" (UID: "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.881921 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx" (OuterVolumeSpecName: "kube-api-access-fl4xx") pod "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" (UID: "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b"). InnerVolumeSpecName "kube-api-access-fl4xx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.885747 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" (UID: "b8e8d571-a5e6-4ab6-acdf-0317889f6d2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.962028 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965174 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca\") pod \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965209 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config\") pod \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965231 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sblkd\" (UniqueName: \"kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd\") pod \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965300 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert\") pod \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\" (UID: \"55a0f308-38a2-4bcf-b125-d7c0fa28f036\") " Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965464 5031 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965477 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl4xx\" (UniqueName: \"kubernetes.io/projected/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-kube-api-access-fl4xx\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965488 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965497 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.965506 5031 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.966237 5031 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config" (OuterVolumeSpecName: "config") pod "55a0f308-38a2-4bcf-b125-d7c0fa28f036" (UID: "55a0f308-38a2-4bcf-b125-d7c0fa28f036"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.966513 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca" (OuterVolumeSpecName: "client-ca") pod "55a0f308-38a2-4bcf-b125-d7c0fa28f036" (UID: "55a0f308-38a2-4bcf-b125-d7c0fa28f036"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.969527 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd" (OuterVolumeSpecName: "kube-api-access-sblkd") pod "55a0f308-38a2-4bcf-b125-d7c0fa28f036" (UID: "55a0f308-38a2-4bcf-b125-d7c0fa28f036"). InnerVolumeSpecName "kube-api-access-sblkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:18 crc kubenswrapper[5031]: I0129 08:44:18.972233 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55a0f308-38a2-4bcf-b125-d7c0fa28f036" (UID: "55a0f308-38a2-4bcf-b125-d7c0fa28f036"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.029810 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6878677f69-scj4v"] Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030011 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerName="controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030023 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerName="controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030036 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="extract-utilities" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030041 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="extract-utilities" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030048 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030055 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030066 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="extract-content" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030071 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="extract-content" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030082 5031 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="registry-server" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030089 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="registry-server" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030100 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" containerName="installer" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030105 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" containerName="installer" Jan 29 08:44:19 crc kubenswrapper[5031]: E0129 08:44:19.030112 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerName="route-controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030118 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerName="route-controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030236 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030248 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" containerName="route-controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030257 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6115352-f309-492c-a7d9-c36ddb9e2454" containerName="registry-server" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030264 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" containerName="controller-manager" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030272 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a29a62-f408-4268-8e7b-ac409fb04a2b" containerName="installer" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.030745 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.035711 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6878677f69-scj4v"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066337 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75jv\" (UniqueName: \"kubernetes.io/projected/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-kube-api-access-n75jv\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066428 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-config\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066463 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-client-ca\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066532 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-serving-cert\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066592 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-proxy-ca-bundles\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066677 5031 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066691 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a0f308-38a2-4bcf-b125-d7c0fa28f036-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066703 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sblkd\" (UniqueName: \"kubernetes.io/projected/55a0f308-38a2-4bcf-b125-d7c0fa28f036-kube-api-access-sblkd\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.066719 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a0f308-38a2-4bcf-b125-d7c0fa28f036-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc 
kubenswrapper[5031]: I0129 08:44:19.105984 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" event={"ID":"55a0f308-38a2-4bcf-b125-d7c0fa28f036","Type":"ContainerDied","Data":"e3e821692e31ebf39b4c36b7c51949d8f1d553c8bbedb32869abd6d2d8a893fc"} Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.106570 5031 scope.go:117] "RemoveContainer" containerID="b0b793f3f52611d2d823fa6cf7d723454c3742c3f1b447fcef7332e67c479a0f" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.106028 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.108281 5031 generic.go:334] "Generic (PLEG): container finished" podID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerID="1ac1f65e25b1cfd44fb2f006c6e2841be65a87de39c1c6fdcb84a0cdc795f2b1" exitCode=0 Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.108644 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerDied","Data":"1ac1f65e25b1cfd44fb2f006c6e2841be65a87de39c1c6fdcb84a0cdc795f2b1"} Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.110133 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" event={"ID":"b8e8d571-a5e6-4ab6-acdf-0317889f6d2b","Type":"ContainerDied","Data":"d4de11f8d133064eeec7d1cba93b4bc97d26a2c76953e3b4e86c45fefc316d80"} Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.110188 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jx726" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.122410 5031 scope.go:117] "RemoveContainer" containerID="38caea6d6d82b7b75f5312c927e6271bb4869424178cfae113a12bcc6f1ffe0b" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.137972 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.145130 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cbgdv"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.157420 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.160528 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jx726"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.168305 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-proxy-ca-bundles\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.168402 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75jv\" (UniqueName: \"kubernetes.io/projected/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-kube-api-access-n75jv\") pod 
\"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.168458 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-config\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.168493 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-client-ca\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.168517 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-serving-cert\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.169941 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-proxy-ca-bundles\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.170056 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-client-ca\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.170385 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-config\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.172817 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-serving-cert\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.187013 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75jv\" (UniqueName: \"kubernetes.io/projected/0a42f135-7c2b-4cca-8fd3-d326ff240f0c-kube-api-access-n75jv\") pod \"controller-manager-6878677f69-scj4v\" (UID: \"0a42f135-7c2b-4cca-8fd3-d326ff240f0c\") " pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.198120 5031 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.198355 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-59md2" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="registry-server" containerID="cri-o://12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d" gracePeriod=2 Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.345265 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.478572 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gmmw" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.586456 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content\") pod \"4ecedf13-919d-482a-bfa7-71e66368c9ef\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.586636 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7222\" (UniqueName: \"kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222\") pod \"4ecedf13-919d-482a-bfa7-71e66368c9ef\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.586791 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities\") pod \"4ecedf13-919d-482a-bfa7-71e66368c9ef\" (UID: \"4ecedf13-919d-482a-bfa7-71e66368c9ef\") " Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.588179 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities" (OuterVolumeSpecName: "utilities") pod "4ecedf13-919d-482a-bfa7-71e66368c9ef" (UID: "4ecedf13-919d-482a-bfa7-71e66368c9ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.591847 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222" (OuterVolumeSpecName: "kube-api-access-z7222") pod "4ecedf13-919d-482a-bfa7-71e66368c9ef" (UID: "4ecedf13-919d-482a-bfa7-71e66368c9ef"). InnerVolumeSpecName "kube-api-access-z7222". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.611722 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ecedf13-919d-482a-bfa7-71e66368c9ef" (UID: "4ecedf13-919d-482a-bfa7-71e66368c9ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.688617 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7222\" (UniqueName: \"kubernetes.io/projected/4ecedf13-919d-482a-bfa7-71e66368c9ef-kube-api-access-z7222\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.688652 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.688662 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecedf13-919d-482a-bfa7-71e66368c9ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:19 crc kubenswrapper[5031]: I0129 08:44:19.857609 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6878677f69-scj4v"] Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.091841 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.120055 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gmmw" event={"ID":"4ecedf13-919d-482a-bfa7-71e66368c9ef","Type":"ContainerDied","Data":"99bb78e0754aeeed69936aa10d9743e79316b360138af43598a78292af6ce0ba"} Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.120139 5031 scope.go:117] "RemoveContainer" containerID="1ac1f65e25b1cfd44fb2f006c6e2841be65a87de39c1c6fdcb84a0cdc795f2b1" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.120302 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gmmw" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.136637 5031 generic.go:334] "Generic (PLEG): container finished" podID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerID="12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d" exitCode=0 Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.136761 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerDied","Data":"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d"} Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.136793 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59md2" event={"ID":"0c7d881b-8764-42f1-a4db-87cde90a3a70","Type":"ContainerDied","Data":"75c53a40bbbe0fce21357b29cf51dacd2f5934d6a88b09423e227198f4d2856b"} Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.136846 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-59md2" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.143743 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" event={"ID":"0a42f135-7c2b-4cca-8fd3-d326ff240f0c","Type":"ContainerStarted","Data":"47940cff7835b83b748c650a69f6d8ed8a399cccdf2914badf6c45281d860f17"} Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.143788 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" event={"ID":"0a42f135-7c2b-4cca-8fd3-d326ff240f0c","Type":"ContainerStarted","Data":"4c498bb2ebb6cc9adce71206d7f8066757bf07315ec2761b34c9787d7ec47eaf"} Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.144808 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.146740 5031 patch_prober.go:28] interesting pod/controller-manager-6878677f69-scj4v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.146810 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" podUID="0a42f135-7c2b-4cca-8fd3-d326ff240f0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.148594 5031 scope.go:117] "RemoveContainer" containerID="17a309a531deedda1b69c3016d37232c4597ee53fd9f42d349e3040d8ac31447" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.197335 5031 scope.go:117] "RemoveContainer" containerID="9a11c41b52d590b9fdf68564f04868a6d2deedcbcd8b7e9a8f457bcf0bf299e7" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.198841 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities\") pod \"0c7d881b-8764-42f1-a4db-87cde90a3a70\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.198902 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4gm\" (UniqueName: \"kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm\") pod \"0c7d881b-8764-42f1-a4db-87cde90a3a70\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.198957 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content\") pod \"0c7d881b-8764-42f1-a4db-87cde90a3a70\" (UID: \"0c7d881b-8764-42f1-a4db-87cde90a3a70\") " Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.201187 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" podStartSLOduration=3.201168039 podStartE2EDuration="3.201168039s" podCreationTimestamp="2026-01-29 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:44:20.194680194 +0000 UTC m=+340.694268146" watchObservedRunningTime="2026-01-29 08:44:20.201168039 +0000 UTC m=+340.700755991" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.203347 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities" (OuterVolumeSpecName: "utilities") pod "0c7d881b-8764-42f1-a4db-87cde90a3a70" (UID: "0c7d881b-8764-42f1-a4db-87cde90a3a70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.207753 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm" (OuterVolumeSpecName: "kube-api-access-zn4gm") pod "0c7d881b-8764-42f1-a4db-87cde90a3a70" (UID: "0c7d881b-8764-42f1-a4db-87cde90a3a70"). InnerVolumeSpecName "kube-api-access-zn4gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.235544 5031 scope.go:117] "RemoveContainer" containerID="12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.243435 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"] Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.250842 5031 scope.go:117] "RemoveContainer" containerID="8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.251141 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gmmw"] Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.280725 5031 scope.go:117] "RemoveContainer" containerID="ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.292477 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" path="/var/lib/kubelet/pods/4ecedf13-919d-482a-bfa7-71e66368c9ef/volumes" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.293424 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a0f308-38a2-4bcf-b125-d7c0fa28f036" path="/var/lib/kubelet/pods/55a0f308-38a2-4bcf-b125-d7c0fa28f036/volumes" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.297116 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e8d571-a5e6-4ab6-acdf-0317889f6d2b" path="/var/lib/kubelet/pods/b8e8d571-a5e6-4ab6-acdf-0317889f6d2b/volumes" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.300741 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.300772 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4gm\" (UniqueName: \"kubernetes.io/projected/0c7d881b-8764-42f1-a4db-87cde90a3a70-kube-api-access-zn4gm\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.304608 5031 scope.go:117] "RemoveContainer" containerID="12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d" Jan 29 08:44:20 crc kubenswrapper[5031]: E0129 08:44:20.306988 5031 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d\": container with ID starting with 12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d not found: ID does not exist" containerID="12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.307055 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d"} err="failed to get container status \"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d\": rpc error: code = NotFound desc = could not find container \"12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d\": container with ID starting with 12a7b123d8ed6827793b2eeb0b426de782c3f89ba000b121ad8f7b5dabf05b2d not found: ID does not exist" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.307098 5031 scope.go:117] "RemoveContainer" containerID="8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630" Jan 29 08:44:20 crc kubenswrapper[5031]: E0129 08:44:20.307509 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630\": container with ID starting with 8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630 not found: ID does not exist" containerID="8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.307548 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630"} err="failed to get container status \"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630\": rpc error: code = NotFound desc = could not find container \"8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630\": container with ID starting with 8baf824245a1e7ebbbde9359624a6241e20610e75d3d8b09cc29e762af098630 not found: ID does not exist" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.307568 5031 scope.go:117] "RemoveContainer" containerID="ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4" Jan 29 08:44:20 crc kubenswrapper[5031]: E0129 08:44:20.307905 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4\": container with ID starting with ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4 not found: ID does not exist" containerID="ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.307938 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4"} err="failed to get container status \"ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4\": rpc error: code = NotFound desc = could not find container \"ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4\": container with ID starting with ded253a180acea2fe243bc93abe12656cda1d921143c1eab39b97173e73579b4 not found: ID does not exist" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.368588 5031 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c7d881b-8764-42f1-a4db-87cde90a3a70" (UID: "0c7d881b-8764-42f1-a4db-87cde90a3a70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.402350 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7d881b-8764-42f1-a4db-87cde90a3a70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.470693 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:44:20 crc kubenswrapper[5031]: I0129 08:44:20.475725 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-59md2"] Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022316 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022546 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="extract-content" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022561 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="extract-content" Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022575 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="extract-utilities" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022580 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="extract-utilities" Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022589 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="registry-server" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022596 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="registry-server" Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022603 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="extract-utilities" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022609 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="extract-utilities" Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022619 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="extract-content" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022624 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="extract-content" Jan 29 08:44:21 crc kubenswrapper[5031]: E0129 08:44:21.022639 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="registry-server" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022644 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="registry-server" Jan 29 08:44:21 crc 
kubenswrapper[5031]: I0129 08:44:21.022730 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ecedf13-919d-482a-bfa7-71e66368c9ef" containerName="registry-server" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.022746 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" containerName="registry-server" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.023087 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027018 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027144 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027186 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027144 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027474 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.027731 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.038119 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.156326 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6878677f69-scj4v" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.212324 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kch7\" (UniqueName: \"kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.212702 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.212726 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc 
kubenswrapper[5031]: I0129 08:44:21.212742 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.313973 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kch7\" (UniqueName: \"kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.314317 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.314503 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.314619 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.315828 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.315912 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.329517 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.331830 5031 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2kch7\" (UniqueName: \"kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7\") pod \"route-controller-manager-5658b8d798-fmnmp\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.337640 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:21 crc kubenswrapper[5031]: I0129 08:44:21.516972 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:44:21 crc kubenswrapper[5031]: W0129 08:44:21.525217 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f864087_7d0c_4b76_a02f_42ee04add66a.slice/crio-57d078b8f6e3b9a8ead9a82a4f0376b51f6384a2780f6f9463eb0b6d835a1deb WatchSource:0}: Error finding container 57d078b8f6e3b9a8ead9a82a4f0376b51f6384a2780f6f9463eb0b6d835a1deb: Status 404 returned error can't find the container with id 57d078b8f6e3b9a8ead9a82a4f0376b51f6384a2780f6f9463eb0b6d835a1deb Jan 29 08:44:22 crc kubenswrapper[5031]: I0129 08:44:22.156129 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" event={"ID":"8f864087-7d0c-4b76-a02f-42ee04add66a","Type":"ContainerStarted","Data":"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d"} Jan 29 08:44:22 crc kubenswrapper[5031]: I0129 08:44:22.157015 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" event={"ID":"8f864087-7d0c-4b76-a02f-42ee04add66a","Type":"ContainerStarted","Data":"57d078b8f6e3b9a8ead9a82a4f0376b51f6384a2780f6f9463eb0b6d835a1deb"} Jan 29 08:44:22 crc kubenswrapper[5031]: I0129 08:44:22.174094 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" podStartSLOduration=5.174074595 podStartE2EDuration="5.174074595s" podCreationTimestamp="2026-01-29 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:44:22.171689927 +0000 UTC m=+342.671277879" watchObservedRunningTime="2026-01-29 08:44:22.174074595 +0000 UTC m=+342.673662547" Jan 29 08:44:22 crc kubenswrapper[5031]: I0129 08:44:22.289826 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7d881b-8764-42f1-a4db-87cde90a3a70" path="/var/lib/kubelet/pods/0c7d881b-8764-42f1-a4db-87cde90a3a70/volumes" Jan 29 08:44:23 crc kubenswrapper[5031]: I0129 08:44:23.161567 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:23 crc kubenswrapper[5031]: I0129 08:44:23.167753 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.636789 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x66vc"] Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.637780 5031 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.649605 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x66vc"] Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784413 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-registry-certificates\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784497 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-bound-sa-token\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784545 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-registry-tls\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784588 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bbq\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-kube-api-access-57bbq\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784628 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784662 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae129437-7b1d-4853-b96e-65244b3749bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-trusted-ca\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.784719 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/ae129437-7b1d-4853-b96e-65244b3749bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.807512 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886146 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae129437-7b1d-4853-b96e-65244b3749bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886563 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-trusted-ca\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886603 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae129437-7b1d-4853-b96e-65244b3749bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886647 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-registry-certificates\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886702 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-bound-sa-token\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886727 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-registry-tls\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.886774 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57bbq\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-kube-api-access-57bbq\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.887490 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae129437-7b1d-4853-b96e-65244b3749bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.888047 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-registry-certificates\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.889642 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae129437-7b1d-4853-b96e-65244b3749bd-trusted-ca\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.892484 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae129437-7b1d-4853-b96e-65244b3749bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.893234 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-registry-tls\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.903041 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-bound-sa-token\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.904397 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57bbq\" (UniqueName: \"kubernetes.io/projected/ae129437-7b1d-4853-b96e-65244b3749bd-kube-api-access-57bbq\") pod \"image-registry-66df7c8f76-x66vc\" (UID: \"ae129437-7b1d-4853-b96e-65244b3749bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:26 crc kubenswrapper[5031]: I0129 08:44:26.957669 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:27 crc kubenswrapper[5031]: I0129 08:44:27.511083 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x66vc"] Jan 29 08:44:28 crc kubenswrapper[5031]: I0129 08:44:28.194659 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" event={"ID":"ae129437-7b1d-4853-b96e-65244b3749bd","Type":"ContainerStarted","Data":"4244d8f66c151d2b1a46ee415755695f3c0942480a56f3302df17bf547364926"} Jan 29 08:44:28 crc kubenswrapper[5031]: I0129 08:44:28.194706 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" event={"ID":"ae129437-7b1d-4853-b96e-65244b3749bd","Type":"ContainerStarted","Data":"41da7c3bcb4bf21ddac8a78747daf340f6a046f7aff3db7455b797a6778614d7"} Jan 29 08:44:28 crc kubenswrapper[5031]: I0129 08:44:28.196068 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:28 crc kubenswrapper[5031]: I0129 08:44:28.215442 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" podStartSLOduration=2.215423435 podStartE2EDuration="2.215423435s" podCreationTimestamp="2026-01-29 08:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:44:28.212683047 +0000 UTC m=+348.712270989" watchObservedRunningTime="2026-01-29 08:44:28.215423435 +0000 UTC m=+348.715011387" Jan 29 08:44:38 crc kubenswrapper[5031]: I0129 08:44:38.493656 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:38 crc kubenswrapper[5031]: I0129 08:44:38.494293 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:46 crc kubenswrapper[5031]: I0129 08:44:46.964931 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-x66vc" Jan 29 08:44:47 crc kubenswrapper[5031]: I0129 08:44:47.020721 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"] Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.171895 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb"] Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.173489 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.175775 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.177992 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.189552 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb"] Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.273679 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.274062 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z686j\" (UniqueName: \"kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.274090 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.375446 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.375516 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z686j\" (UniqueName: \"kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.375554 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.376428 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume\") pod 
\"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.384643 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.394297 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z686j\" (UniqueName: \"kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j\") pod \"collect-profiles-29494605-kzbwb\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.490923 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:00 crc kubenswrapper[5031]: I0129 08:45:00.898864 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb"] Jan 29 08:45:01 crc kubenswrapper[5031]: I0129 08:45:01.367851 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" event={"ID":"0c0a310f-43ae-4cfe-abfd-d90b36b691ec","Type":"ContainerStarted","Data":"4da9e29c632601898d8ee1ba070040d7cb54dcdc6c5a971500f9c890942ac9ef"} Jan 29 08:45:01 crc kubenswrapper[5031]: I0129 08:45:01.367899 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" event={"ID":"0c0a310f-43ae-4cfe-abfd-d90b36b691ec","Type":"ContainerStarted","Data":"685f24a496b2926d81a0b15adad27b7377e1165dced0771ec8feffb2ec8a653a"} Jan 29 08:45:01 crc kubenswrapper[5031]: I0129 08:45:01.386982 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" podStartSLOduration=1.386961566 podStartE2EDuration="1.386961566s" podCreationTimestamp="2026-01-29 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:45:01.384075465 +0000 UTC m=+381.883663427" watchObservedRunningTime="2026-01-29 08:45:01.386961566 +0000 UTC m=+381.886549518" Jan 29 08:45:02 crc kubenswrapper[5031]: I0129 08:45:02.374959 5031 generic.go:334] "Generic (PLEG): container finished" podID="0c0a310f-43ae-4cfe-abfd-d90b36b691ec" containerID="4da9e29c632601898d8ee1ba070040d7cb54dcdc6c5a971500f9c890942ac9ef" exitCode=0 Jan 29 08:45:02 crc kubenswrapper[5031]: I0129 08:45:02.375049 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" event={"ID":"0c0a310f-43ae-4cfe-abfd-d90b36b691ec","Type":"ContainerDied","Data":"4da9e29c632601898d8ee1ba070040d7cb54dcdc6c5a971500f9c890942ac9ef"} Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.627293 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.723682 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume\") pod \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.723762 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume\") pod \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.723826 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z686j\" (UniqueName: \"kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j\") pod \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\" (UID: \"0c0a310f-43ae-4cfe-abfd-d90b36b691ec\") " Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.724392 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c0a310f-43ae-4cfe-abfd-d90b36b691ec" (UID: "0c0a310f-43ae-4cfe-abfd-d90b36b691ec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.729032 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c0a310f-43ae-4cfe-abfd-d90b36b691ec" (UID: "0c0a310f-43ae-4cfe-abfd-d90b36b691ec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.729279 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j" (OuterVolumeSpecName: "kube-api-access-z686j") pod "0c0a310f-43ae-4cfe-abfd-d90b36b691ec" (UID: "0c0a310f-43ae-4cfe-abfd-d90b36b691ec"). InnerVolumeSpecName "kube-api-access-z686j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.826454 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z686j\" (UniqueName: \"kubernetes.io/projected/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-kube-api-access-z686j\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.826494 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:03 crc kubenswrapper[5031]: I0129 08:45:03.826509 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c0a310f-43ae-4cfe-abfd-d90b36b691ec-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:04 crc kubenswrapper[5031]: I0129 08:45:04.389427 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" event={"ID":"0c0a310f-43ae-4cfe-abfd-d90b36b691ec","Type":"ContainerDied","Data":"685f24a496b2926d81a0b15adad27b7377e1165dced0771ec8feffb2ec8a653a"} Jan 29 08:45:04 crc kubenswrapper[5031]: I0129 08:45:04.389476 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685f24a496b2926d81a0b15adad27b7377e1165dced0771ec8feffb2ec8a653a" Jan 29 08:45:04 crc kubenswrapper[5031]: I0129 08:45:04.389548 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb" Jan 29 08:45:08 crc kubenswrapper[5031]: I0129 08:45:08.494544 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:45:08 crc kubenswrapper[5031]: I0129 08:45:08.495187 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.060242 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" podUID="7dee0d39-2211-4219-a780-bcf29f69425a" containerName="registry" containerID="cri-o://e0a5ec387534c6c1f6e123a5e5a6096bee1f79108d65e43d62a5f84acc47eabc" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.429864 5031 generic.go:334] "Generic (PLEG): container finished" podID="7dee0d39-2211-4219-a780-bcf29f69425a" containerID="e0a5ec387534c6c1f6e123a5e5a6096bee1f79108d65e43d62a5f84acc47eabc" exitCode=0 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.429971 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" event={"ID":"7dee0d39-2211-4219-a780-bcf29f69425a","Type":"ContainerDied","Data":"e0a5ec387534c6c1f6e123a5e5a6096bee1f79108d65e43d62a5f84acc47eabc"} Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.522164 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:45:12 crc 
kubenswrapper[5031]: I0129 08:45:12.522583 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dflqz" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="registry-server" containerID="cri-o://465b0621d9456cf54c5d343743066e0a78ef8efc898c7284558d4b1a216daa9e" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.529609 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f9r7"] Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.529826 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5f9r7" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="registry-server" containerID="cri-o://b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.536770 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.536966 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" containerID="cri-o://025168d9d6d0200cf18b7855e8b0d0d7a89a39941108b5db0b73482758ed6059" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.552716 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"] Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.553005 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9hg9" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="registry-server" containerID="cri-o://31c7b22294bc0e63cbd99f735a6fd8ff6b8e792b1d9219e202aec6489a751de4" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.558446 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qjfs"] Jan 29 08:45:12 crc kubenswrapper[5031]: E0129 08:45:12.558706 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0a310f-43ae-4cfe-abfd-d90b36b691ec" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.558729 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0a310f-43ae-4cfe-abfd-d90b36b691ec" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.558852 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0a310f-43ae-4cfe-abfd-d90b36b691ec" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.559319 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.565838 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-627gc"] Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.566121 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-627gc" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="registry-server" containerID="cri-o://5abc398bc8b1311e459ee44497f35a956c858c07b13e3bfe0aadba53c8fb58cd" gracePeriod=30 Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.576348 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qjfs"] Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.667806 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmn29\" (UniqueName: \"kubernetes.io/projected/75a63559-30d6-47bc-9f30-5385de9826f0-kube-api-access-zmn29\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.668083 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.668240 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.770352 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.770513 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.770608 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmn29\" (UniqueName: \"kubernetes.io/projected/75a63559-30d6-47bc-9f30-5385de9826f0-kube-api-access-zmn29\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.773738 5031 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.782191 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75a63559-30d6-47bc-9f30-5385de9826f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.786451 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmn29\" (UniqueName: \"kubernetes.io/projected/75a63559-30d6-47bc-9f30-5385de9826f0-kube-api-access-zmn29\") pod \"marketplace-operator-79b997595-4qjfs\" (UID: \"75a63559-30d6-47bc-9f30-5385de9826f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:12 crc kubenswrapper[5031]: I0129 08:45:12.888234 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.034421 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.129004 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.173265 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qjfs"] Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.174909 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities\") pod \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.175031 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content\") pod \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.175064 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55g95\" (UniqueName: \"kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95\") pod \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\" (UID: \"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.179092 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities" (OuterVolumeSpecName: "utilities") pod "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" (UID: "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.182347 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95" (OuterVolumeSpecName: "kube-api-access-55g95") pod "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" (UID: "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b"). InnerVolumeSpecName "kube-api-access-55g95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.278820 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmhdb\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.279474 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.279618 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.279715 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.279830 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.279996 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.280104 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.280413 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7dee0d39-2211-4219-a780-bcf29f69425a\" (UID: \"7dee0d39-2211-4219-a780-bcf29f69425a\") " Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.280833 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.280938 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55g95\" (UniqueName: \"kubernetes.io/projected/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-kube-api-access-55g95\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.282267 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.286877 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" (UID: "c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.294782 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.297184 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.299815 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb" (OuterVolumeSpecName: "kube-api-access-jmhdb") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "kube-api-access-jmhdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.301034 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.301319 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.312108 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.345963 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7dee0d39-2211-4219-a780-bcf29f69425a" (UID: "7dee0d39-2211-4219-a780-bcf29f69425a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381823 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381854 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381866 5031 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7dee0d39-2211-4219-a780-bcf29f69425a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381875 5031 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381884 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmhdb\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-kube-api-access-jmhdb\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381893 5031 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7dee0d39-2211-4219-a780-bcf29f69425a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381901 5031 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7dee0d39-2211-4219-a780-bcf29f69425a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.381909 5031 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7dee0d39-2211-4219-a780-bcf29f69425a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.437022 5031 generic.go:334] "Generic (PLEG): container finished" podID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerID="31c7b22294bc0e63cbd99f735a6fd8ff6b8e792b1d9219e202aec6489a751de4" exitCode=0 Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.437085 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerDied","Data":"31c7b22294bc0e63cbd99f735a6fd8ff6b8e792b1d9219e202aec6489a751de4"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.438598 5031 generic.go:334] "Generic (PLEG): container finished" podID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerID="465b0621d9456cf54c5d343743066e0a78ef8efc898c7284558d4b1a216daa9e" exitCode=0 Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.438687 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerDied","Data":"465b0621d9456cf54c5d343743066e0a78ef8efc898c7284558d4b1a216daa9e"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.440832 5031 generic.go:334] "Generic (PLEG): container finished" podID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerID="b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec" exitCode=0 Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.440879 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f9r7" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.440914 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerDied","Data":"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.441124 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f9r7" event={"ID":"c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b","Type":"ContainerDied","Data":"10b1d35c8691db7c915a494fa24bf26c3c590d27f8bd3fd6eda648f91de0b949"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.441151 5031 scope.go:117] "RemoveContainer" containerID="b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec" Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.443181 5031 generic.go:334] "Generic (PLEG): container finished" podID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerID="5abc398bc8b1311e459ee44497f35a956c858c07b13e3bfe0aadba53c8fb58cd" exitCode=0 Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.443358 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerDied","Data":"5abc398bc8b1311e459ee44497f35a956c858c07b13e3bfe0aadba53c8fb58cd"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.444626 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" event={"ID":"75a63559-30d6-47bc-9f30-5385de9826f0","Type":"ContainerStarted","Data":"31123f3e8d13e0adb159ee3e1c9d7b92b8125179ebd12655decb8d86b8feacd8"} Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.446195 5031 generic.go:334] "Generic (PLEG): container finished" podID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerID="025168d9d6d0200cf18b7855e8b0d0d7a89a39941108b5db0b73482758ed6059" exitCode=0 Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.446260 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" event={"ID":"dd3a139e-483b-41e7-ac87-3d3a0f86a059","Type":"ContainerDied","Data":"025168d9d6d0200cf18b7855e8b0d0d7a89a39941108b5db0b73482758ed6059"} 
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.447710 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx" event={"ID":"7dee0d39-2211-4219-a780-bcf29f69425a","Type":"ContainerDied","Data":"19145aecd6523579c39007e1366095f2e4984fbbe91c6d42177b75ce026d8958"}
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.447752 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ll2lx"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.457266 5031 scope.go:117] "RemoveContainer" containerID="8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.478531 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f9r7"]
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.483771 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5f9r7"]
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.489070 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"]
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.499501 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ll2lx"]
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.501629 5031 scope.go:117] "RemoveContainer" containerID="7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.516991 5031 scope.go:117] "RemoveContainer" containerID="b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec"
Jan 29 08:45:13 crc kubenswrapper[5031]: E0129 08:45:13.517512 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec\": container with ID starting with b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec not found: ID does not exist" containerID="b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.517548 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec"} err="failed to get container status \"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec\": rpc error: code = NotFound desc = could not find container \"b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec\": container with ID starting with b6d994cb0b3e6f4726ceb7c2385eb4ceaf3dc1b8e983d1e3758fec771694ceec not found: ID does not exist"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.517579 5031 scope.go:117] "RemoveContainer" containerID="8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"
Jan 29 08:45:13 crc kubenswrapper[5031]: E0129 08:45:13.518126 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58\": container with ID starting with 8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58 not found: ID does not exist" containerID="8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.518152 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58"} err="failed to get container status \"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58\": rpc error: code = NotFound desc = could not find container \"8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58\": container with ID starting with 8c85f7eea92b0e4a55cda52626c869b8ed91d1bb4cd2f854e3f605bf1a7e2a58 not found: ID does not exist"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.518181 5031 scope.go:117] "RemoveContainer" containerID="7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"
Jan 29 08:45:13 crc kubenswrapper[5031]: E0129 08:45:13.518856 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad\": container with ID starting with 7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad not found: ID does not exist" containerID="7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.518929 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad"} err="failed to get container status \"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad\": rpc error: code = NotFound desc = could not find container \"7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad\": container with ID starting with 7d73b4e244f135e5a526a0fe813906639a890dc16e8f4b5adccf227ee011bcad not found: ID does not exist"
Jan 29 08:45:13 crc kubenswrapper[5031]: I0129 08:45:13.518978 5031 scope.go:117] "RemoveContainer" containerID="e0a5ec387534c6c1f6e123a5e5a6096bee1f79108d65e43d62a5f84acc47eabc"
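The E-level entries above look alarming but are routine during this kind of teardown: "RemoveContainer" asks CRI-O to delete a container it has already removed, the status lookup comes back with an rpc NotFound, and pod_container_deletor.go:53 logs the error and moves on; the log itself shows the kubelet immediately proceeding to the next container ID. An illustrative filter (assumed log shape, not part of the log) to list which container IDs hit this benign path:

    import re, sys

    # Collect container IDs whose CRI status lookup returned NotFound,
    # i.e. containers the runtime had already deleted.
    NOTFOUND = re.compile(r'code = NotFound desc = could not find container \\"([0-9a-f]{64})\\"')

    seen = set()
    for line in sys.stdin:
        for cid in NOTFOUND.findall(line):
            if cid not in seen:
                seen.add(cid)
                print("already gone:", cid[:13])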
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.012787 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-kube-api-access-5vjsx" (OuterVolumeSpecName: "kube-api-access-5vjsx") pod "1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" (UID: "1fe2f9cf-9f00-48da-849a-29aa4b0e66ec"). InnerVolumeSpecName "kube-api-access-5vjsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.063317 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" (UID: "1fe2f9cf-9f00-48da-849a-29aa4b0e66ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.101455 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.101500 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.101519 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vjsx\" (UniqueName: \"kubernetes.io/projected/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec-kube-api-access-5vjsx\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.258859 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hg9" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.289935 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dee0d39-2211-4219-a780-bcf29f69425a" path="/var/lib/kubelet/pods/7dee0d39-2211-4219-a780-bcf29f69425a/volumes" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.290630 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" path="/var/lib/kubelet/pods/c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b/volumes" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.336522 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.404608 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities\") pod \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.404687 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqbdv\" (UniqueName: \"kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv\") pod \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.404750 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content\") pod \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\" (UID: \"ad4a529c-a8ab-47c5-84cd-44002bebb7ce\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.405618 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities" (OuterVolumeSpecName: "utilities") pod "ad4a529c-a8ab-47c5-84cd-44002bebb7ce" (UID: "ad4a529c-a8ab-47c5-84cd-44002bebb7ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.408452 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv" (OuterVolumeSpecName: "kube-api-access-pqbdv") pod "ad4a529c-a8ab-47c5-84cd-44002bebb7ce" (UID: "ad4a529c-a8ab-47c5-84cd-44002bebb7ce"). InnerVolumeSpecName "kube-api-access-pqbdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.426622 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad4a529c-a8ab-47c5-84cd-44002bebb7ce" (UID: "ad4a529c-a8ab-47c5-84cd-44002bebb7ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.460045 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" event={"ID":"75a63559-30d6-47bc-9f30-5385de9826f0","Type":"ContainerStarted","Data":"880e87bf6a3345b73f354b5647c6f0ce9df35d9cc16160c303c0e98573b821c8"} Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.461566 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" event={"ID":"dd3a139e-483b-41e7-ac87-3d3a0f86a059","Type":"ContainerDied","Data":"84531bf4e0cbc37793a052ef9f62af1b406dd57201463dee919951d4bfe86400"} Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.461621 5031 scope.go:117] "RemoveContainer" containerID="025168d9d6d0200cf18b7855e8b0d0d7a89a39941108b5db0b73482758ed6059" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.461631 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r78xm" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.475265 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hg9" event={"ID":"ad4a529c-a8ab-47c5-84cd-44002bebb7ce","Type":"ContainerDied","Data":"6f7c84a84146ec1bf5386600a2f6c41ebd9227c86b95feb0ef8d1d8f458133e5"} Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.475383 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hg9" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.485719 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dflqz" event={"ID":"1fe2f9cf-9f00-48da-849a-29aa4b0e66ec","Type":"ContainerDied","Data":"099f58b3e490a6c36a2c50c379f8e5ea70e8c0d1ffdb1ad37e60b70e03dd103d"} Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.485808 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dflqz" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.501592 5031 scope.go:117] "RemoveContainer" containerID="31c7b22294bc0e63cbd99f735a6fd8ff6b8e792b1d9219e202aec6489a751de4" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505465 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics\") pod \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505542 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca\") pod \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505597 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frmwl\" (UniqueName: \"kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl\") pod \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\" (UID: \"dd3a139e-483b-41e7-ac87-3d3a0f86a059\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505813 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505832 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqbdv\" (UniqueName: \"kubernetes.io/projected/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-kube-api-access-pqbdv\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.505844 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4a529c-a8ab-47c5-84cd-44002bebb7ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.506750 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd3a139e-483b-41e7-ac87-3d3a0f86a059" 
(UID: "dd3a139e-483b-41e7-ac87-3d3a0f86a059"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.510079 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd3a139e-483b-41e7-ac87-3d3a0f86a059" (UID: "dd3a139e-483b-41e7-ac87-3d3a0f86a059"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.511136 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl" (OuterVolumeSpecName: "kube-api-access-frmwl") pod "dd3a139e-483b-41e7-ac87-3d3a0f86a059" (UID: "dd3a139e-483b-41e7-ac87-3d3a0f86a059"). InnerVolumeSpecName "kube-api-access-frmwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.515427 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.519646 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dflqz"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.528169 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.528533 5031 scope.go:117] "RemoveContainer" containerID="769fad034c616df288b92cbe18e36914a2ee51fc869337ab7bd252a7512be42d" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.541817 5031 scope.go:117] "RemoveContainer" containerID="1dc033b9449e6c9edc85f4a5a1b39e291b6354296db200a517af8860f10c572b" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.542541 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hg9"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.558645 5031 scope.go:117] "RemoveContainer" containerID="465b0621d9456cf54c5d343743066e0a78ef8efc898c7284558d4b1a216daa9e" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.570315 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.579304 5031 scope.go:117] "RemoveContainer" containerID="b9b10315bd2a7338e7ca30b9cd6742ec86369d1b5fd95ea6b2a0dea0c4f662ff" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.606689 5031 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.606723 5031 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd3a139e-483b-41e7-ac87-3d3a0f86a059-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.606732 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frmwl\" (UniqueName: \"kubernetes.io/projected/dd3a139e-483b-41e7-ac87-3d3a0f86a059-kube-api-access-frmwl\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.613909 5031 scope.go:117] "RemoveContainer" containerID="31bc60f84331edf2f10a3054cb8828016e4ea5e3c1b40c56cb836bdebe1372eb" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.707325 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities\") pod \"dd2c0807-7bcf-435a-8961-fdef958e6c53\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.707647 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content\") pod \"dd2c0807-7bcf-435a-8961-fdef958e6c53\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.707673 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lzg6\" (UniqueName: \"kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6\") pod \"dd2c0807-7bcf-435a-8961-fdef958e6c53\" (UID: \"dd2c0807-7bcf-435a-8961-fdef958e6c53\") " Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.708028 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities" (OuterVolumeSpecName: "utilities") pod "dd2c0807-7bcf-435a-8961-fdef958e6c53" (UID: "dd2c0807-7bcf-435a-8961-fdef958e6c53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.710690 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6" (OuterVolumeSpecName: "kube-api-access-5lzg6") pod "dd2c0807-7bcf-435a-8961-fdef958e6c53" (UID: "dd2c0807-7bcf-435a-8961-fdef958e6c53"). InnerVolumeSpecName "kube-api-access-5lzg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.791956 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.796908 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r78xm"] Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.808907 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.808947 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lzg6\" (UniqueName: \"kubernetes.io/projected/dd2c0807-7bcf-435a-8961-fdef958e6c53-kube-api-access-5lzg6\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.823415 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd2c0807-7bcf-435a-8961-fdef958e6c53" (UID: "dd2c0807-7bcf-435a-8961-fdef958e6c53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.910535 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c0807-7bcf-435a-8961-fdef958e6c53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931628 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cr7rh"] Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931822 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931834 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931845 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931851 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931860 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931866 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931873 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931879 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931886 5031 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931892 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931904 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931911 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931922 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dee0d39-2211-4219-a780-bcf29f69425a" containerName="registry" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931929 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dee0d39-2211-4219-a780-bcf29f69425a" containerName="registry" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931940 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931946 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931964 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931970 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931977 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931983 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.931990 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.931995 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.932005 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932013 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="extract-utilities" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.932021 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932027 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="extract-content" Jan 29 08:45:14 crc kubenswrapper[5031]: E0129 08:45:14.932034 5031 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932040 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932129 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" containerName="marketplace-operator" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932143 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932152 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932160 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dee0d39-2211-4219-a780-bcf29f69425a" containerName="registry" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932171 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b3db00-a7fa-4d2d-a1ef-02d5bab5714b" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932183 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" containerName="registry-server" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.932862 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.937561 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:45:14 crc kubenswrapper[5031]: I0129 08:45:14.942128 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr7rh"] Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.012080 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-catalog-content\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.012151 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ll4b\" (UniqueName: \"kubernetes.io/projected/cb02be63-04db-40b0-9f74-892cec88b048-kube-api-access-9ll4b\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.012208 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-utilities\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.113203 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
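The cpu_manager/memory_manager burst above is triggered by the SyncLoop ADD of community-operators-cr7rh: on the next pod admission, both managers sweep their checkpointed per-container state and evict entries belonging to pods that no longer exist, logging one "RemoveStaleState" / "Deleted CPUSet assignment" pair per stale container. A quick illustrative tally (assumed log shape, not part of the log) of how many stale container entries each departed pod UID left behind:

    import re, sys
    from collections import Counter

    # Count stale container entries reclaimed per departed pod UID.
    STALE = re.compile(r'"RemoveStaleState: removing container" podUID="([^"]+)" containerName="([^"]+)"')

    per_pod = Counter()
    for line in sys.stdin:
        for uid, _name in STALE.findall(line):
            per_pod[uid] += 1

    for uid, count in per_pod.most_common():
        print(count, "stale container(s) for pod", uid)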
\"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-catalog-content\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.113271 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ll4b\" (UniqueName: \"kubernetes.io/projected/cb02be63-04db-40b0-9f74-892cec88b048-kube-api-access-9ll4b\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.113327 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-utilities\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.113948 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-catalog-content\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.114049 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb02be63-04db-40b0-9f74-892cec88b048-utilities\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.130261 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ll4b\" (UniqueName: \"kubernetes.io/projected/cb02be63-04db-40b0-9f74-892cec88b048-kube-api-access-9ll4b\") pod \"community-operators-cr7rh\" (UID: \"cb02be63-04db-40b0-9f74-892cec88b048\") " pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.282523 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.462242 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr7rh"] Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.491395 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr7rh" event={"ID":"cb02be63-04db-40b0-9f74-892cec88b048","Type":"ContainerStarted","Data":"d0f18a3a6beae17ce0170b674dd074cb5b7729fa64cc5e71c0c63e0c4d73eaef"} Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.496721 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-627gc" event={"ID":"dd2c0807-7bcf-435a-8961-fdef958e6c53","Type":"ContainerDied","Data":"404f8b03daa95847aa8806c038a0f0a0214664a790cbbb2c8ed546f4796f04eb"} Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.496752 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-627gc" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.496774 5031 scope.go:117] "RemoveContainer" containerID="5abc398bc8b1311e459ee44497f35a956c858c07b13e3bfe0aadba53c8fb58cd" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.499468 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.505517 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.519623 5031 scope.go:117] "RemoveContainer" containerID="beedc2dab719895280630153be970a6a1bb772d6bc677c6035175a7374387226" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.521176 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4qjfs" podStartSLOduration=3.521155945 podStartE2EDuration="3.521155945s" podCreationTimestamp="2026-01-29 08:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:45:15.518756236 +0000 UTC m=+396.018344188" watchObservedRunningTime="2026-01-29 08:45:15.521155945 +0000 UTC m=+396.020743917" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.548415 5031 scope.go:117] "RemoveContainer" containerID="103951b064d0fedffe647c9143ad0e7ba07707771488eb0a477c04afb69a92cf" Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.559540 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-627gc"] Jan 29 08:45:15 crc kubenswrapper[5031]: I0129 08:45:15.564248 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-627gc"] Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.288749 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe2f9cf-9f00-48da-849a-29aa4b0e66ec" path="/var/lib/kubelet/pods/1fe2f9cf-9f00-48da-849a-29aa4b0e66ec/volumes" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.289474 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4a529c-a8ab-47c5-84cd-44002bebb7ce" path="/var/lib/kubelet/pods/ad4a529c-a8ab-47c5-84cd-44002bebb7ce/volumes" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.290031 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2c0807-7bcf-435a-8961-fdef958e6c53" path="/var/lib/kubelet/pods/dd2c0807-7bcf-435a-8961-fdef958e6c53/volumes" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.291183 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3a139e-483b-41e7-ac87-3d3a0f86a059" path="/var/lib/kubelet/pods/dd3a139e-483b-41e7-ac87-3d3a0f86a059/volumes" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.519328 5031 generic.go:334] "Generic (PLEG): container finished" podID="cb02be63-04db-40b0-9f74-892cec88b048" containerID="d089f4ae18632e8328652003635317fb6423344d9704c57c59ed1dd77e979f80" exitCode=0 Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.520634 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr7rh" event={"ID":"cb02be63-04db-40b0-9f74-892cec88b048","Type":"ContainerDied","Data":"d089f4ae18632e8328652003635317fb6423344d9704c57c59ed1dd77e979f80"} Jan 29 08:45:16 crc 
kubenswrapper[5031]: I0129 08:45:16.747564 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vlv"] Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.748871 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.752757 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vlv"] Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.755075 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.839992 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-catalog-content\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.840103 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cp7\" (UniqueName: \"kubernetes.io/projected/2928c877-fb1d-41fa-9324-13efccbca747-kube-api-access-c9cp7\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.840162 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-utilities\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.941476 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-catalog-content\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.941585 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9cp7\" (UniqueName: \"kubernetes.io/projected/2928c877-fb1d-41fa-9324-13efccbca747-kube-api-access-c9cp7\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.941661 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-utilities\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.942429 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-catalog-content\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc 
kubenswrapper[5031]: I0129 08:45:16.942509 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2928c877-fb1d-41fa-9324-13efccbca747-utilities\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:16 crc kubenswrapper[5031]: I0129 08:45:16.971978 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9cp7\" (UniqueName: \"kubernetes.io/projected/2928c877-fb1d-41fa-9324-13efccbca747-kube-api-access-c9cp7\") pod \"redhat-marketplace-m4vlv\" (UID: \"2928c877-fb1d-41fa-9324-13efccbca747\") " pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.064597 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.342810 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kr6tb"] Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.344486 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.347952 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.350762 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kr6tb"] Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.447884 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzbl\" (UniqueName: \"kubernetes.io/projected/73a47626-7d91-4369-a5f0-75aba46b4f34-kube-api-access-jmzbl\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.447942 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-catalog-content\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.448027 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-utilities\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.482010 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vlv"] Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.537142 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vlv" event={"ID":"2928c877-fb1d-41fa-9324-13efccbca747","Type":"ContainerStarted","Data":"8d24eba92b3f5814ffdb7c596a66716233879971f7c46683926ba80314cc4fca"} Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.549299 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-utilities\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.549348 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzbl\" (UniqueName: \"kubernetes.io/projected/73a47626-7d91-4369-a5f0-75aba46b4f34-kube-api-access-jmzbl\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.549399 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-catalog-content\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.550151 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-catalog-content\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.550158 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a47626-7d91-4369-a5f0-75aba46b4f34-utilities\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.568962 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzbl\" (UniqueName: \"kubernetes.io/projected/73a47626-7d91-4369-a5f0-75aba46b4f34-kube-api-access-jmzbl\") pod \"redhat-operators-kr6tb\" (UID: \"73a47626-7d91-4369-a5f0-75aba46b4f34\") " pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:17 crc kubenswrapper[5031]: I0129 08:45:17.664949 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:18 crc kubenswrapper[5031]: I0129 08:45:18.544532 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr7rh" event={"ID":"cb02be63-04db-40b0-9f74-892cec88b048","Type":"ContainerStarted","Data":"4d772b7824bec8a4a46a097eff2cc3c255ed49bb829b817c03a5748af6c64dde"} Jan 29 08:45:18 crc kubenswrapper[5031]: I0129 08:45:18.546164 5031 generic.go:334] "Generic (PLEG): container finished" podID="2928c877-fb1d-41fa-9324-13efccbca747" containerID="03b20291726e4d0938167b81c2fef348f7dfda5fd7c9b41f6fa34ecf958af954" exitCode=0 Jan 29 08:45:18 crc kubenswrapper[5031]: I0129 08:45:18.546211 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vlv" event={"ID":"2928c877-fb1d-41fa-9324-13efccbca747","Type":"ContainerDied","Data":"03b20291726e4d0938167b81c2fef348f7dfda5fd7c9b41f6fa34ecf958af954"} Jan 29 08:45:18 crc kubenswrapper[5031]: I0129 08:45:18.564023 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kr6tb"] Jan 29 08:45:18 crc kubenswrapper[5031]: W0129 08:45:18.574459 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a47626_7d91_4369_a5f0_75aba46b4f34.slice/crio-20d46503c3a0e80b1631789c1b4c585930308f2e1d6aad0b9841c9cc224a0191 WatchSource:0}: Error finding container 20d46503c3a0e80b1631789c1b4c585930308f2e1d6aad0b9841c9cc224a0191: Status 404 returned error can't find the container with id 20d46503c3a0e80b1631789c1b4c585930308f2e1d6aad0b9841c9cc224a0191 Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.135658 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.138021 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.142795 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.145722 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.271331 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.271607 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.271856 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.373307 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.373389 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.373449 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.373978 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.374013 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content\") pod \"certified-operators-8c5v8\" (UID: 
\"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.392190 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24\") pod \"certified-operators-8c5v8\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.462143 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.554649 5031 generic.go:334] "Generic (PLEG): container finished" podID="73a47626-7d91-4369-a5f0-75aba46b4f34" containerID="3b106b2b2d8b760788dcc2ea5b47df107c97a37c5670e0b611691d906410493f" exitCode=0 Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.555009 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr6tb" event={"ID":"73a47626-7d91-4369-a5f0-75aba46b4f34","Type":"ContainerDied","Data":"3b106b2b2d8b760788dcc2ea5b47df107c97a37c5670e0b611691d906410493f"} Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.555073 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr6tb" event={"ID":"73a47626-7d91-4369-a5f0-75aba46b4f34","Type":"ContainerStarted","Data":"20d46503c3a0e80b1631789c1b4c585930308f2e1d6aad0b9841c9cc224a0191"} Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.559027 5031 generic.go:334] "Generic (PLEG): container finished" podID="cb02be63-04db-40b0-9f74-892cec88b048" containerID="4d772b7824bec8a4a46a097eff2cc3c255ed49bb829b817c03a5748af6c64dde" exitCode=0 Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.559822 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr7rh" event={"ID":"cb02be63-04db-40b0-9f74-892cec88b048","Type":"ContainerDied","Data":"4d772b7824bec8a4a46a097eff2cc3c255ed49bb829b817c03a5748af6c64dde"} Jan 29 08:45:19 crc kubenswrapper[5031]: I0129 08:45:19.868535 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 08:45:19 crc kubenswrapper[5031]: W0129 08:45:19.877878 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd56b5c_ccf1_4132_a5fe_c5d6ed8068e1.slice/crio-3c9ab45798d81a081b3ee629dbe6397f169f981aa0be5e4380b49b3c1ea54450 WatchSource:0}: Error finding container 3c9ab45798d81a081b3ee629dbe6397f169f981aa0be5e4380b49b3c1ea54450: Status 404 returned error can't find the container with id 3c9ab45798d81a081b3ee629dbe6397f169f981aa0be5e4380b49b3c1ea54450 Jan 29 08:45:20 crc kubenswrapper[5031]: I0129 08:45:20.569952 5031 generic.go:334] "Generic (PLEG): container finished" podID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerID="0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0" exitCode=0 Jan 29 08:45:20 crc kubenswrapper[5031]: I0129 08:45:20.570360 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerDied","Data":"0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0"} Jan 29 08:45:20 crc kubenswrapper[5031]: I0129 08:45:20.570414 5031 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerStarted","Data":"3c9ab45798d81a081b3ee629dbe6397f169f981aa0be5e4380b49b3c1ea54450"} Jan 29 08:45:20 crc kubenswrapper[5031]: I0129 08:45:20.583930 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:45:20 crc kubenswrapper[5031]: I0129 08:45:20.584227 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerName="route-controller-manager" containerID="cri-o://7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d" gracePeriod=30 Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.458147 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.577252 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr7rh" event={"ID":"cb02be63-04db-40b0-9f74-892cec88b048","Type":"ContainerStarted","Data":"9b9e74816af8607932d953375f41118f4b74510b1e93c4472ff8110186c4439e"} Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.580995 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" event={"ID":"8f864087-7d0c-4b76-a02f-42ee04add66a","Type":"ContainerDied","Data":"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d"} Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.581128 5031 scope.go:117] "RemoveContainer" containerID="7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.579261 5031 generic.go:334] "Generic (PLEG): container finished" podID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerID="7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d" exitCode=0 Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.579297 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.581522 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" event={"ID":"8f864087-7d0c-4b76-a02f-42ee04add66a","Type":"ContainerDied","Data":"57d078b8f6e3b9a8ead9a82a4f0376b51f6384a2780f6f9463eb0b6d835a1deb"} Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.587045 5031 generic.go:334] "Generic (PLEG): container finished" podID="2928c877-fb1d-41fa-9324-13efccbca747" containerID="49ceca31aeca9d9b020e03db0465c2cb1da58101c72ee4c3e8daac595dd34199" exitCode=0 Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.587235 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vlv" event={"ID":"2928c877-fb1d-41fa-9324-13efccbca747","Type":"ContainerDied","Data":"49ceca31aeca9d9b020e03db0465c2cb1da58101c72ee4c3e8daac595dd34199"} Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.595710 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr6tb" event={"ID":"73a47626-7d91-4369-a5f0-75aba46b4f34","Type":"ContainerStarted","Data":"940cba953da5c565030647cc0dbce8de80ae1642f5606538a66af0400078417a"} Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.610726 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kch7\" (UniqueName: \"kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7\") pod \"8f864087-7d0c-4b76-a02f-42ee04add66a\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.610837 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config\") pod \"8f864087-7d0c-4b76-a02f-42ee04add66a\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.610887 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca\") pod \"8f864087-7d0c-4b76-a02f-42ee04add66a\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.610923 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert\") pod \"8f864087-7d0c-4b76-a02f-42ee04add66a\" (UID: \"8f864087-7d0c-4b76-a02f-42ee04add66a\") " Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.612613 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f864087-7d0c-4b76-a02f-42ee04add66a" (UID: "8f864087-7d0c-4b76-a02f-42ee04add66a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.614420 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config" (OuterVolumeSpecName: "config") pod "8f864087-7d0c-4b76-a02f-42ee04add66a" (UID: "8f864087-7d0c-4b76-a02f-42ee04add66a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.616246 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cr7rh" podStartSLOduration=3.267520078 podStartE2EDuration="7.616222402s" podCreationTimestamp="2026-01-29 08:45:14 +0000 UTC" firstStartedPulling="2026-01-29 08:45:16.522055498 +0000 UTC m=+397.021643450" lastFinishedPulling="2026-01-29 08:45:20.870757822 +0000 UTC m=+401.370345774" observedRunningTime="2026-01-29 08:45:21.596232205 +0000 UTC m=+402.095820157" watchObservedRunningTime="2026-01-29 08:45:21.616222402 +0000 UTC m=+402.115810364" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.616967 5031 scope.go:117] "RemoveContainer" containerID="7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.617243 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7" (OuterVolumeSpecName: "kube-api-access-2kch7") pod "8f864087-7d0c-4b76-a02f-42ee04add66a" (UID: "8f864087-7d0c-4b76-a02f-42ee04add66a"). InnerVolumeSpecName "kube-api-access-2kch7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.618474 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f864087-7d0c-4b76-a02f-42ee04add66a" (UID: "8f864087-7d0c-4b76-a02f-42ee04add66a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:21 crc kubenswrapper[5031]: E0129 08:45:21.624121 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d\": container with ID starting with 7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d not found: ID does not exist" containerID="7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.624157 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d"} err="failed to get container status \"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d\": rpc error: code = NotFound desc = could not find container \"7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d\": container with ID starting with 7ee7f5e84db5dc8570b9d9b31360b71dc4a2d743f93dc66df88a36fe25591d9d not found: ID does not exist" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.711967 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.712007 5031 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f864087-7d0c-4b76-a02f-42ee04add66a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.712017 5031 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f864087-7d0c-4b76-a02f-42ee04add66a-serving-cert\") 
on node \"crc\" DevicePath \"\"" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.712028 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kch7\" (UniqueName: \"kubernetes.io/projected/8f864087-7d0c-4b76-a02f-42ee04add66a-kube-api-access-2kch7\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.921871 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:45:21 crc kubenswrapper[5031]: I0129 08:45:21.925925 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp"] Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.064503 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg"] Jan 29 08:45:22 crc kubenswrapper[5031]: E0129 08:45:22.064711 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerName="route-controller-manager" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.064724 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerName="route-controller-manager" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.064821 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerName="route-controller-manager" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.065169 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.068314 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.068440 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.068313 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.068753 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.069303 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.069549 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.072614 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg"] Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.217502 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chzb\" (UniqueName: \"kubernetes.io/projected/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-kube-api-access-9chzb\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " 
pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.217901 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-client-ca\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.217932 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-config\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.217978 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-serving-cert\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.289594 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" path="/var/lib/kubelet/pods/8f864087-7d0c-4b76-a02f-42ee04add66a/volumes" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.319413 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-client-ca\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.319675 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-config\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.319948 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-serving-cert\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.319994 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9chzb\" (UniqueName: \"kubernetes.io/projected/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-kube-api-access-9chzb\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.320743 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-client-ca\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.320830 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-config\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.324948 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-serving-cert\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.336771 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9chzb\" (UniqueName: \"kubernetes.io/projected/f3904bd6-3c2b-4ba4-b47a-efb18448aecf-kube-api-access-9chzb\") pod \"route-controller-manager-869d5c4597-48vjg\" (UID: \"f3904bd6-3c2b-4ba4-b47a-efb18448aecf\") " pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.340490 5031 patch_prober.go:28] interesting pod/route-controller-manager-5658b8d798-fmnmp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" start-of-body= Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.340550 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5658b8d798-fmnmp" podUID="8f864087-7d0c-4b76-a02f-42ee04add66a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.385170 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.604695 5031 generic.go:334] "Generic (PLEG): container finished" podID="73a47626-7d91-4369-a5f0-75aba46b4f34" containerID="940cba953da5c565030647cc0dbce8de80ae1642f5606538a66af0400078417a" exitCode=0 Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.604782 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr6tb" event={"ID":"73a47626-7d91-4369-a5f0-75aba46b4f34","Type":"ContainerDied","Data":"940cba953da5c565030647cc0dbce8de80ae1642f5606538a66af0400078417a"} Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.617011 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vlv" event={"ID":"2928c877-fb1d-41fa-9324-13efccbca747","Type":"ContainerStarted","Data":"01ee5e64bda3d50c6f2d47b93cc0a701fb24527230a2b2f59cf98fa66e2e3036"} Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.646339 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4vlv" podStartSLOduration=4.245528496 podStartE2EDuration="6.646314835s" podCreationTimestamp="2026-01-29 08:45:16 +0000 UTC" firstStartedPulling="2026-01-29 08:45:19.562798598 +0000 UTC m=+400.062386550" lastFinishedPulling="2026-01-29 08:45:21.963584937 +0000 UTC m=+402.463172889" observedRunningTime="2026-01-29 08:45:22.642213948 +0000 UTC m=+403.141801910" watchObservedRunningTime="2026-01-29 08:45:22.646314835 +0000 UTC m=+403.145902787" Jan 29 08:45:22 crc kubenswrapper[5031]: I0129 08:45:22.824005 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg"] Jan 29 08:45:22 crc kubenswrapper[5031]: W0129 08:45:22.830005 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3904bd6_3c2b_4ba4_b47a_efb18448aecf.slice/crio-5114bc2e01f3c45d7e69bc35599c7b480cc6ea476354bd39239354f4b13d3168 WatchSource:0}: Error finding container 5114bc2e01f3c45d7e69bc35599c7b480cc6ea476354bd39239354f4b13d3168: Status 404 returned error can't find the container with id 5114bc2e01f3c45d7e69bc35599c7b480cc6ea476354bd39239354f4b13d3168 Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.622014 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" event={"ID":"f3904bd6-3c2b-4ba4-b47a-efb18448aecf","Type":"ContainerStarted","Data":"2361da8c2861fe22ef620af4e4c14026c70cd6afc2fe23c4b55c92bf08391ccc"} Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.623402 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.623530 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" event={"ID":"f3904bd6-3c2b-4ba4-b47a-efb18448aecf","Type":"ContainerStarted","Data":"5114bc2e01f3c45d7e69bc35599c7b480cc6ea476354bd39239354f4b13d3168"} Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.624322 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr6tb" 
event={"ID":"73a47626-7d91-4369-a5f0-75aba46b4f34","Type":"ContainerStarted","Data":"64a666d69bafaf980fc4a789e0a07633dd6057602233c5d955fa119310f364f1"} Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.625838 5031 generic.go:334] "Generic (PLEG): container finished" podID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerID="c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a" exitCode=0 Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.625868 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerDied","Data":"c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a"} Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.643055 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" podStartSLOduration=3.643035939 podStartE2EDuration="3.643035939s" podCreationTimestamp="2026-01-29 08:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:45:23.641236198 +0000 UTC m=+404.140824150" watchObservedRunningTime="2026-01-29 08:45:23.643035939 +0000 UTC m=+404.142623891" Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.651082 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-869d5c4597-48vjg" Jan 29 08:45:23 crc kubenswrapper[5031]: I0129 08:45:23.664619 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kr6tb" podStartSLOduration=2.937174079 podStartE2EDuration="6.664605091s" podCreationTimestamp="2026-01-29 08:45:17 +0000 UTC" firstStartedPulling="2026-01-29 08:45:19.562791738 +0000 UTC m=+400.062379690" lastFinishedPulling="2026-01-29 08:45:23.29022275 +0000 UTC m=+403.789810702" observedRunningTime="2026-01-29 08:45:23.66313492 +0000 UTC m=+404.162722872" watchObservedRunningTime="2026-01-29 08:45:23.664605091 +0000 UTC m=+404.164193043" Jan 29 08:45:25 crc kubenswrapper[5031]: I0129 08:45:25.283044 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:25 crc kubenswrapper[5031]: I0129 08:45:25.283736 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:25 crc kubenswrapper[5031]: I0129 08:45:25.323743 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:25 crc kubenswrapper[5031]: I0129 08:45:25.638679 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerStarted","Data":"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31"} Jan 29 08:45:25 crc kubenswrapper[5031]: I0129 08:45:25.672558 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8c5v8" podStartSLOduration=4.255965268 podStartE2EDuration="6.672531042s" podCreationTimestamp="2026-01-29 08:45:19 +0000 UTC" firstStartedPulling="2026-01-29 08:45:21.597849251 +0000 UTC m=+402.097437203" lastFinishedPulling="2026-01-29 08:45:24.014415025 +0000 UTC m=+404.514002977" 
observedRunningTime="2026-01-29 08:45:25.65519307 +0000 UTC m=+406.154781012" watchObservedRunningTime="2026-01-29 08:45:25.672531042 +0000 UTC m=+406.172118994" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.064899 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.066237 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.125524 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.665305 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.665351 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:27 crc kubenswrapper[5031]: I0129 08:45:27.698019 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4vlv" Jan 29 08:45:28 crc kubenswrapper[5031]: I0129 08:45:28.703282 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kr6tb" podUID="73a47626-7d91-4369-a5f0-75aba46b4f34" containerName="registry-server" probeResult="failure" output=< Jan 29 08:45:28 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 08:45:28 crc kubenswrapper[5031]: > Jan 29 08:45:29 crc kubenswrapper[5031]: I0129 08:45:29.463013 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:29 crc kubenswrapper[5031]: I0129 08:45:29.463060 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:29 crc kubenswrapper[5031]: I0129 08:45:29.503239 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:29 crc kubenswrapper[5031]: I0129 08:45:29.709012 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 08:45:35 crc kubenswrapper[5031]: I0129 08:45:35.322860 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cr7rh" Jan 29 08:45:37 crc kubenswrapper[5031]: I0129 08:45:37.713688 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:37 crc kubenswrapper[5031]: I0129 08:45:37.760341 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kr6tb" Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.493857 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.493909 5031 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.493952 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.494550 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.494623 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb" gracePeriod=600 Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.720869 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb" exitCode=0 Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.721069 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb"} Jan 29 08:45:38 crc kubenswrapper[5031]: I0129 08:45:38.721258 5031 scope.go:117] "RemoveContainer" containerID="03b4775dd6587d0e037ea8a0b2076b31fc4f35295d0ab9ecf6fa6b4d525d550a" Jan 29 08:45:39 crc kubenswrapper[5031]: I0129 08:45:39.728972 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977"} Jan 29 08:47:38 crc kubenswrapper[5031]: I0129 08:47:38.493523 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:47:38 crc kubenswrapper[5031]: I0129 08:47:38.495601 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:48:08 crc kubenswrapper[5031]: I0129 08:48:08.493532 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:48:08 crc 
kubenswrapper[5031]: I0129 08:48:08.494163 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.493641 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.494767 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.494848 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.495776 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.495851 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977" gracePeriod=600 Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.685604 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977" exitCode=0 Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.685668 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977"} Jan 29 08:48:38 crc kubenswrapper[5031]: I0129 08:48:38.685736 5031 scope.go:117] "RemoveContainer" containerID="a6cb656f7dd9fa337f6f10631a03e5fbb542392a52bba086f8928db8a33aaccb" Jan 29 08:48:39 crc kubenswrapper[5031]: I0129 08:48:39.697539 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1"} Jan 29 08:49:40 crc kubenswrapper[5031]: I0129 08:49:40.536047 5031 scope.go:117] "RemoveContainer" containerID="de849a2bb322015303373fe36ccd756ddc2db18205805591f3095a15b043ca6a" Jan 29 08:50:38 crc kubenswrapper[5031]: I0129 08:50:38.493512 5031 patch_prober.go:28] interesting 
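The machine-config-daemon liveness probe is a plain HTTP GET against http://127.0.0.1:8798/health. The failure cadence above (08:47:38, 08:48:08, 08:48:38, then a restart on the third consecutive failure) is consistent with a 30s probe period and a failure threshold of 3, though the pod's actual probe spec is not in this log. A minimal sketch performing the same check; the URL is copied from the probe output:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the kubelet probes for machine-config-daemon liveness.
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// While the daemon is down this reports the same root cause the log
		// shows: "dial tcp 127.0.0.1:8798: connect: connection refused".
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe succeeded:", resp.Status)
}
```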
Jan 29 08:50:38 crc kubenswrapper[5031]: I0129 08:50:38.494223 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:51:08 crc kubenswrapper[5031]: I0129 08:51:08.494290 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:51:08 crc kubenswrapper[5031]: I0129 08:51:08.494903 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.473462 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.474505 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.476724 5031 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4nnkj"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.476724 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.476792 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.482853 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-hfrt9"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.484953 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hfrt9"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.487067 5031 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6xr5l"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.495422 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ff66k"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.496212 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.497972 5031 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-sv5b6"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.505082 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.512621 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ff66k"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.515222 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hfrt9"]
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.628719 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7g4\" (UniqueName: \"kubernetes.io/projected/18d66dd7-f94a-41fd-9d04-f09c1cea0e58-kube-api-access-cg7g4\") pod \"cert-manager-858654f9db-hfrt9\" (UID: \"18d66dd7-f94a-41fd-9d04-f09c1cea0e58\") " pod="cert-manager/cert-manager-858654f9db-hfrt9"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.628774 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcncz\" (UniqueName: \"kubernetes.io/projected/f62b13b3-ff83-4f97-a291-8067c9f5cdc9-kube-api-access-xcncz\") pod \"cert-manager-cainjector-cf98fcc89-l47tb\" (UID: \"f62b13b3-ff83-4f97-a291-8067c9f5cdc9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.628812 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9g6\" (UniqueName: \"kubernetes.io/projected/8983adca-9e9f-4d65-9ae5-091fa81877a0-kube-api-access-qq9g6\") pod \"cert-manager-webhook-687f57d79b-ff66k\" (UID: \"8983adca-9e9f-4d65-9ae5-091fa81877a0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.730064 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg7g4\" (UniqueName: \"kubernetes.io/projected/18d66dd7-f94a-41fd-9d04-f09c1cea0e58-kube-api-access-cg7g4\") pod \"cert-manager-858654f9db-hfrt9\" (UID: \"18d66dd7-f94a-41fd-9d04-f09c1cea0e58\") " pod="cert-manager/cert-manager-858654f9db-hfrt9"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.730119 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcncz\" (UniqueName: \"kubernetes.io/projected/f62b13b3-ff83-4f97-a291-8067c9f5cdc9-kube-api-access-xcncz\") pod \"cert-manager-cainjector-cf98fcc89-l47tb\" (UID: \"f62b13b3-ff83-4f97-a291-8067c9f5cdc9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.730157 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq9g6\" (UniqueName: \"kubernetes.io/projected/8983adca-9e9f-4d65-9ae5-091fa81877a0-kube-api-access-qq9g6\") pod \"cert-manager-webhook-687f57d79b-ff66k\" (UID: \"8983adca-9e9f-4d65-9ae5-091fa81877a0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.748643 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcncz\" (UniqueName: \"kubernetes.io/projected/f62b13b3-ff83-4f97-a291-8067c9f5cdc9-kube-api-access-xcncz\") pod \"cert-manager-cainjector-cf98fcc89-l47tb\" (UID: \"f62b13b3-ff83-4f97-a291-8067c9f5cdc9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.749558 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq9g6\" (UniqueName: \"kubernetes.io/projected/8983adca-9e9f-4d65-9ae5-091fa81877a0-kube-api-access-qq9g6\") pod \"cert-manager-webhook-687f57d79b-ff66k\" (UID: \"8983adca-9e9f-4d65-9ae5-091fa81877a0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.757951 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7g4\" (UniqueName: \"kubernetes.io/projected/18d66dd7-f94a-41fd-9d04-f09c1cea0e58-kube-api-access-cg7g4\") pod \"cert-manager-858654f9db-hfrt9\" (UID: \"18d66dd7-f94a-41fd-9d04-f09c1cea0e58\") " pod="cert-manager/cert-manager-858654f9db-hfrt9"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.789684 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.806395 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hfrt9"
Jan 29 08:51:11 crc kubenswrapper[5031]: I0129 08:51:11.814875 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.225802 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l47tb"]
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.237311 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.252881 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hfrt9"]
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.257139 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ff66k"]
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.964474 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hfrt9" event={"ID":"18d66dd7-f94a-41fd-9d04-f09c1cea0e58","Type":"ContainerStarted","Data":"60491b4ddc0bd4411d71ab1fc3c8f93a464ad9c2eaba79bf880ebfde114b4071"}
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.966827 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb" event={"ID":"f62b13b3-ff83-4f97-a291-8067c9f5cdc9","Type":"ContainerStarted","Data":"0fc2babc68d133f2f9a68bc0730d2f3dd2a1b692d080576e7d8c991c91ab4868"}
Jan 29 08:51:12 crc kubenswrapper[5031]: I0129 08:51:12.968009 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k" event={"ID":"8983adca-9e9f-4d65-9ae5-091fa81877a0","Type":"ContainerStarted","Data":"e5d746c8bc916827bf5cb12893d4bdee14cb42c62c7ca4b35a40df7dad38d645"}
Jan 29 08:51:16 crc kubenswrapper[5031]: I0129 08:51:16.987716 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k" event={"ID":"8983adca-9e9f-4d65-9ae5-091fa81877a0","Type":"ContainerStarted","Data":"bc636f0dd93407959ecc7f0774e3639da7a38577f8ab53945b3ad93e40e64289"}
Jan 29 08:51:16 crc kubenswrapper[5031]: I0129 08:51:16.989295 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k"
Jan 29 08:51:16 crc kubenswrapper[5031]: I0129 08:51:16.991091 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hfrt9" event={"ID":"18d66dd7-f94a-41fd-9d04-f09c1cea0e58","Type":"ContainerStarted","Data":"ab315b783f992d5fa6aa77867290c00d3c637f63e422e0c6830473d8953df0ab"}
Jan 29 08:51:17 crc kubenswrapper[5031]: I0129 08:51:17.004462 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k" podStartSLOduration=2.004351036 podStartE2EDuration="6.004444272s" podCreationTimestamp="2026-01-29 08:51:11 +0000 UTC" firstStartedPulling="2026-01-29 08:51:12.258322636 +0000 UTC m=+752.757910578" lastFinishedPulling="2026-01-29 08:51:16.258415852 +0000 UTC m=+756.758003814" observedRunningTime="2026-01-29 08:51:17.002325455 +0000 UTC m=+757.501913417" watchObservedRunningTime="2026-01-29 08:51:17.004444272 +0000 UTC m=+757.504032234"
Jan 29 08:51:17 crc kubenswrapper[5031]: I0129 08:51:17.025235 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-hfrt9" podStartSLOduration=2.017744311 podStartE2EDuration="6.025217296s" podCreationTimestamp="2026-01-29 08:51:11 +0000 UTC" firstStartedPulling="2026-01-29 08:51:12.255956912 +0000 UTC m=+752.755544864" lastFinishedPulling="2026-01-29 08:51:16.263429897 +0000 UTC m=+756.763017849" observedRunningTime="2026-01-29 08:51:17.020391065 +0000 UTC m=+757.519979027" watchObservedRunningTime="2026-01-29 08:51:17.025217296 +0000 UTC m=+757.524805238"
Jan 29 08:51:17 crc kubenswrapper[5031]: I0129 08:51:17.998118 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb" event={"ID":"f62b13b3-ff83-4f97-a291-8067c9f5cdc9","Type":"ContainerStarted","Data":"6f74eb0c5528144ed3b6b78e03c555bf259bdb214e5f2cbb09edc7c05d4a6e92"}
Jan 29 08:51:18 crc kubenswrapper[5031]: I0129 08:51:18.013745 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l47tb" podStartSLOduration=1.638190307 podStartE2EDuration="7.013719522s" podCreationTimestamp="2026-01-29 08:51:11 +0000 UTC" firstStartedPulling="2026-01-29 08:51:12.237045419 +0000 UTC m=+752.736633371" lastFinishedPulling="2026-01-29 08:51:17.612574594 +0000 UTC m=+758.112162586" observedRunningTime="2026-01-29 08:51:18.009686343 +0000 UTC m=+758.509274295" watchObservedRunningTime="2026-01-29 08:51:18.013719522 +0000 UTC m=+758.513307474"
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.493247 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7pds"]
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.496892 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-controller" containerID="cri-o://0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497008 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="nbdb" containerID="cri-o://3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497081 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-node" containerID="cri-o://48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497128 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-acl-logging" containerID="cri-o://9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497154 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497568 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="sbdb" containerID="cri-o://0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.497655 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="northd" containerID="cri-o://5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.527614 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" containerID="cri-o://c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" gracePeriod=30
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.848561 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/3.log"
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.851210 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovn-acl-logging/0.log"
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.851668 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovn-controller/0.log"
Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.852021 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903624 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-422wz"] Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903873 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="nbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903892 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="nbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903905 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-acl-logging" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903913 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-acl-logging" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903922 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903928 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903935 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903941 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903946 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-node" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903952 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-node" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903961 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="northd" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903967 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="northd" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903974 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="sbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903979 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="sbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.903992 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.903998 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.904007 5031 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904014 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.904024 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kubecfg-setup" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904034 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kubecfg-setup" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.904044 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904053 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.904065 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904073 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904183 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-node" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904196 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-acl-logging" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904208 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="sbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904217 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904226 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904237 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904247 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovn-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904257 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904265 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="northd" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904273 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 
08:51:19.904280 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="nbdb" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904290 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: E0129 08:51:19.904467 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.904479 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerName="ovnkube-controller" Jan 29 08:51:19 crc kubenswrapper[5031]: I0129 08:51:19.906473 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.010994 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovnkube-controller/3.log" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.013245 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovn-acl-logging/0.log" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.013778 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f7pds_2afca9b4-a79c-40db-8c5f-0369e09228b9/ovn-controller/0.log" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014200 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014228 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014240 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014251 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014260 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014269 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" exitCode=0 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014280 5031 generic.go:334] "Generic (PLEG): container finished" podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" exitCode=143 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014290 5031 generic.go:334] "Generic (PLEG): container finished" 
podID="2afca9b4-a79c-40db-8c5f-0369e09228b9" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" exitCode=143 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014313 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014342 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014406 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014421 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014433 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014451 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014461 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014472 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014478 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014484 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014489 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014496 5031 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014502 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014507 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014512 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014519 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014527 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014533 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014539 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014545 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014551 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014556 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014487 5031 scope.go:117] "RemoveContainer" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014561 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014655 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014667 5031 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014676 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014688 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014700 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014709 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014714 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014721 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014727 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014732 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014739 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014745 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014750 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014756 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014763 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7pds" event={"ID":"2afca9b4-a79c-40db-8c5f-0369e09228b9","Type":"ContainerDied","Data":"993f199bf4789aa7315079a22fa9cc8f3fbd728cf19e1e1e20d6ff3b743c5d6d"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014773 5031 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014779 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014784 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014790 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014795 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014801 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014805 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014812 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014817 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.014822 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.016237 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/2.log" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.016730 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/1.log" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.016771 5031 generic.go:334] "Generic (PLEG): container finished" podID="e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad" containerID="36a3e18c8bf74378ac5216bc97095f9be8985c97e82e42362c7bcc0b1857c92e" exitCode=2 Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.016798 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerDied","Data":"36a3e18c8bf74378ac5216bc97095f9be8985c97e82e42362c7bcc0b1857c92e"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.016824 5031 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6"} Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.017297 5031 scope.go:117] "RemoveContainer" containerID="36a3e18c8bf74378ac5216bc97095f9be8985c97e82e42362c7bcc0b1857c92e" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.037157 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041013 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041066 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041092 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041132 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041156 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041173 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041223 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041235 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041232 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns" (OuterVolumeSpecName: 
"host-run-netns") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041254 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041253 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041289 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041319 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041331 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041341 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket" (OuterVolumeSpecName: "log-socket") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041381 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041395 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041416 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041434 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log" (OuterVolumeSpecName: "node-log") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041438 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041462 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041465 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041491 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041507 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041523 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041540 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041547 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041608 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sl9d\" (UniqueName: \"kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041653 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn\") pod \"2afca9b4-a79c-40db-8c5f-0369e09228b9\" (UID: \"2afca9b4-a79c-40db-8c5f-0369e09228b9\") " Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041772 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041842 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041851 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-ovn\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041917 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041950 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-log-socket\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041988 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042013 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/647c9007-54b2-4cb6-bbff-8e35c1893782-ovn-node-metrics-cert\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042052 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042083 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-netd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041982 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042121 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrxc\" (UniqueName: \"kubernetes.io/projected/647c9007-54b2-4cb6-bbff-8e35c1893782-kube-api-access-9jrxc\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042170 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042193 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash" (OuterVolumeSpecName: "host-slash") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042169 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.041195 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042321 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-etc-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042404 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-kubelet\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042457 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-slash\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042500 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-var-lib-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042546 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-systemd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042571 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-env-overrides\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042598 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-systemd-units\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042620 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-bin\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042645 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-node-log\") pod 
\"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042667 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-config\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042694 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-script-lib\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042720 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-netns\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042787 5031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042803 5031 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042817 5031 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042828 5031 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042839 5031 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042849 5031 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042862 5031 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042876 5031 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042948 5031 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042961 5031 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042973 5031 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.042985 5031 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.043087 5031 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.043127 5031 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.043146 5031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.043186 5031 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.043199 5031 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.047827 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.051732 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d" (OuterVolumeSpecName: "kube-api-access-9sl9d") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "kube-api-access-9sl9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.057188 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2afca9b4-a79c-40db-8c5f-0369e09228b9" (UID: "2afca9b4-a79c-40db-8c5f-0369e09228b9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.081223 5031 scope.go:117] "RemoveContainer" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.105882 5031 scope.go:117] "RemoveContainer" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.118508 5031 scope.go:117] "RemoveContainer" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.133196 5031 scope.go:117] "RemoveContainer" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.143850 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-slash\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.143904 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-var-lib-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.143930 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-systemd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.143951 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-env-overrides\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.143973 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-systemd-units\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144014 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-bin\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 
08:51:20.144036 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-node-log\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144057 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-config\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144079 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-script-lib\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144125 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-netns\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144159 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-ovn\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144244 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144268 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-log-socket\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144295 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144316 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/647c9007-54b2-4cb6-bbff-8e35c1893782-ovn-node-metrics-cert\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144343 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144381 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-netd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144423 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrxc\" (UniqueName: \"kubernetes.io/projected/647c9007-54b2-4cb6-bbff-8e35c1893782-kube-api-access-9jrxc\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144461 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-etc-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144481 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-netns\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144492 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144487 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-kubelet\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144538 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-ovn\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144567 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-systemd-units\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144577 5031 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2afca9b4-a79c-40db-8c5f-0369e09228b9-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144603 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-bin\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144616 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sl9d\" (UniqueName: \"kubernetes.io/projected/2afca9b4-a79c-40db-8c5f-0369e09228b9-kube-api-access-9sl9d\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144632 5031 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2afca9b4-a79c-40db-8c5f-0369e09228b9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144634 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-node-log\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144660 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-slash\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144689 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-var-lib-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144715 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-systemd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144774 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-run-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.144831 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-log-socket\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145210 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-env-overrides\") pod \"ovnkube-node-422wz\" 
(UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145286 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-config\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145430 5031 scope.go:117] "RemoveContainer" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145468 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-cni-netd\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145507 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-run-ovn-kubernetes\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145513 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-etc-openvswitch\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145521 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/647c9007-54b2-4cb6-bbff-8e35c1893782-host-kubelet\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.145435 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/647c9007-54b2-4cb6-bbff-8e35c1893782-ovnkube-script-lib\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.149537 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/647c9007-54b2-4cb6-bbff-8e35c1893782-ovn-node-metrics-cert\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.161800 5031 scope.go:117] "RemoveContainer" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.163161 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrxc\" (UniqueName: \"kubernetes.io/projected/647c9007-54b2-4cb6-bbff-8e35c1893782-kube-api-access-9jrxc\") pod \"ovnkube-node-422wz\" (UID: \"647c9007-54b2-4cb6-bbff-8e35c1893782\") " pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: 
I0129 08:51:20.178499 5031 scope.go:117] "RemoveContainer" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.192543 5031 scope.go:117] "RemoveContainer" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.204341 5031 scope.go:117] "RemoveContainer" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.204818 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.204858 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} err="failed to get container status \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.204883 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.205170 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": container with ID starting with bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06 not found: ID does not exist" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.205249 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} err="failed to get container status \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": rpc error: code = NotFound desc = could not find container \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": container with ID starting with bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.205308 5031 scope.go:117] "RemoveContainer" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.205552 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": container with ID starting with 0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a not found: ID does not exist" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.205621 5031 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} err="failed to get container status \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": rpc error: code = NotFound desc = could not find container \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": container with ID starting with 0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.205678 5031 scope.go:117] "RemoveContainer" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.206026 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": container with ID starting with 3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678 not found: ID does not exist" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.206098 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} err="failed to get container status \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": rpc error: code = NotFound desc = could not find container \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": container with ID starting with 3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.206164 5031 scope.go:117] "RemoveContainer" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.206675 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": container with ID starting with 5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76 not found: ID does not exist" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.206694 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} err="failed to get container status \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": rpc error: code = NotFound desc = could not find container \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": container with ID starting with 5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.206706 5031 scope.go:117] "RemoveContainer" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.206906 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": container with ID starting with bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b not found: ID does not exist" 
containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.206982 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} err="failed to get container status \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": rpc error: code = NotFound desc = could not find container \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": container with ID starting with bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.207044 5031 scope.go:117] "RemoveContainer" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.207308 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": container with ID starting with 48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8 not found: ID does not exist" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.207328 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} err="failed to get container status \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": rpc error: code = NotFound desc = could not find container \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": container with ID starting with 48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.207346 5031 scope.go:117] "RemoveContainer" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.207562 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": container with ID starting with 9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b not found: ID does not exist" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.207634 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} err="failed to get container status \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": rpc error: code = NotFound desc = could not find container \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": container with ID starting with 9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.207697 5031 scope.go:117] "RemoveContainer" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.207940 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": container with ID starting with 0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b not found: ID does not exist" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208026 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} err="failed to get container status \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": rpc error: code = NotFound desc = could not find container \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": container with ID starting with 0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208096 5031 scope.go:117] "RemoveContainer" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: E0129 08:51:20.208406 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": container with ID starting with 54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0 not found: ID does not exist" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208492 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} err="failed to get container status \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": rpc error: code = NotFound desc = could not find container \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": container with ID starting with 54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208564 5031 scope.go:117] "RemoveContainer" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208896 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} err="failed to get container status \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.208972 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.209261 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} err="failed to get container status \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": rpc error: code = NotFound desc = could not find container \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": container with ID starting with 
bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.209351 5031 scope.go:117] "RemoveContainer" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.209764 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} err="failed to get container status \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": rpc error: code = NotFound desc = could not find container \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": container with ID starting with 0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.209854 5031 scope.go:117] "RemoveContainer" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210149 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} err="failed to get container status \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": rpc error: code = NotFound desc = could not find container \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": container with ID starting with 3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210239 5031 scope.go:117] "RemoveContainer" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210514 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} err="failed to get container status \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": rpc error: code = NotFound desc = could not find container \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": container with ID starting with 5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210532 5031 scope.go:117] "RemoveContainer" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210739 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} err="failed to get container status \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": rpc error: code = NotFound desc = could not find container \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": container with ID starting with bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.210812 5031 scope.go:117] "RemoveContainer" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.211036 5031 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} err="failed to get container status \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": rpc error: code = NotFound desc = could not find container \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": container with ID starting with 48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.211188 5031 scope.go:117] "RemoveContainer" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.212347 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} err="failed to get container status \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": rpc error: code = NotFound desc = could not find container \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": container with ID starting with 9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.212508 5031 scope.go:117] "RemoveContainer" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.212833 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} err="failed to get container status \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": rpc error: code = NotFound desc = could not find container \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": container with ID starting with 0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.212866 5031 scope.go:117] "RemoveContainer" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.213268 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} err="failed to get container status \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": rpc error: code = NotFound desc = could not find container \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": container with ID starting with 54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.213377 5031 scope.go:117] "RemoveContainer" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.213783 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} err="failed to get container status \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" Jan 
29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.213823 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214132 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} err="failed to get container status \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": rpc error: code = NotFound desc = could not find container \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": container with ID starting with bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214220 5031 scope.go:117] "RemoveContainer" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214520 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} err="failed to get container status \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": rpc error: code = NotFound desc = could not find container \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": container with ID starting with 0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214541 5031 scope.go:117] "RemoveContainer" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214754 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} err="failed to get container status \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": rpc error: code = NotFound desc = could not find container \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": container with ID starting with 3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214775 5031 scope.go:117] "RemoveContainer" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.214975 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} err="failed to get container status \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": rpc error: code = NotFound desc = could not find container \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": container with ID starting with 5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.215061 5031 scope.go:117] "RemoveContainer" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.215342 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} err="failed to get container status 
\"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": rpc error: code = NotFound desc = could not find container \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": container with ID starting with bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.215444 5031 scope.go:117] "RemoveContainer" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.215688 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} err="failed to get container status \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": rpc error: code = NotFound desc = could not find container \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": container with ID starting with 48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.215766 5031 scope.go:117] "RemoveContainer" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216027 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} err="failed to get container status \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": rpc error: code = NotFound desc = could not find container \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": container with ID starting with 9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216100 5031 scope.go:117] "RemoveContainer" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216377 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} err="failed to get container status \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": rpc error: code = NotFound desc = could not find container \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": container with ID starting with 0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216464 5031 scope.go:117] "RemoveContainer" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216707 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} err="failed to get container status \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": rpc error: code = NotFound desc = could not find container \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": container with ID starting with 54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.216798 5031 scope.go:117] "RemoveContainer" 
containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217078 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} err="failed to get container status \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217175 5031 scope.go:117] "RemoveContainer" containerID="bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217490 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06"} err="failed to get container status \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": rpc error: code = NotFound desc = could not find container \"bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06\": container with ID starting with bbd2619ec30cf65d386679f5cea029a1de1fe262a5840fd896d2716fa71d8e06 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217584 5031 scope.go:117] "RemoveContainer" containerID="0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217955 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a"} err="failed to get container status \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": rpc error: code = NotFound desc = could not find container \"0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a\": container with ID starting with 0ffe7bd970c279f378e94ed26c6647e5d1ef02135cd6cbb86ff85d1ebce9dc4a not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.217975 5031 scope.go:117] "RemoveContainer" containerID="3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218233 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678"} err="failed to get container status \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": rpc error: code = NotFound desc = could not find container \"3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678\": container with ID starting with 3d4596f24036c5e8de06777be2e6af07e35e943e01dd82b543d8e8f4bf93a678 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218270 5031 scope.go:117] "RemoveContainer" containerID="5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218495 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76"} err="failed to get container status \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": rpc error: code = NotFound desc = could not find 
container \"5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76\": container with ID starting with 5de9c67efaabaee959bd049b3314aff44b52d3cb6bd9b5c7e247af9d6e6f5c76 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218588 5031 scope.go:117] "RemoveContainer" containerID="bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218850 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.218882 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b"} err="failed to get container status \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": rpc error: code = NotFound desc = could not find container \"bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b\": container with ID starting with bc4449d291f5c9a0d7ad32b49eba220a2975cb0bda30eb680e604d79aa59a23b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219016 5031 scope.go:117] "RemoveContainer" containerID="48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219212 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8"} err="failed to get container status \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": rpc error: code = NotFound desc = could not find container \"48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8\": container with ID starting with 48dd7e0e2894bba7935d64c416a3c7d93a83fc10cb26593f22f16e66f9479bd8 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219233 5031 scope.go:117] "RemoveContainer" containerID="9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219545 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b"} err="failed to get container status \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": rpc error: code = NotFound desc = could not find container \"9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b\": container with ID starting with 9b11d846b59c0a4fcf41b4c8d2bae1718237aa2fb94d51aacee9da41c39f5c0b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219561 5031 scope.go:117] "RemoveContainer" containerID="0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219783 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b"} err="failed to get container status \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": rpc error: code = NotFound desc = could not find container \"0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b\": container with ID starting with 0997e65283e61764c30024f281639e254ca1057317f6a67c918cb672178e376b not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.219806 5031 
scope.go:117] "RemoveContainer" containerID="54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.220025 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0"} err="failed to get container status \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": rpc error: code = NotFound desc = could not find container \"54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0\": container with ID starting with 54dd3093c3be251b6a8ae73669a614dffa7921669b72e70354fbae7f179f0eb0 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.220046 5031 scope.go:117] "RemoveContainer" containerID="c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.221204 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757"} err="failed to get container status \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": rpc error: code = NotFound desc = could not find container \"c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757\": container with ID starting with c66bf1cd1aaa5a45f3a1c0b6f7b5f17bdbcae8a02af9814fa6a47147d80ff757 not found: ID does not exist" Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.340665 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7pds"] Jan 29 08:51:20 crc kubenswrapper[5031]: I0129 08:51:20.344124 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7pds"] Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.030066 5031 generic.go:334] "Generic (PLEG): container finished" podID="647c9007-54b2-4cb6-bbff-8e35c1893782" containerID="6983cc07e161bac3b09b01c4427306b8ac04e68749a24dc8bec16da46ed32b83" exitCode=0 Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.030219 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerDied","Data":"6983cc07e161bac3b09b01c4427306b8ac04e68749a24dc8bec16da46ed32b83"} Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.031707 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"89ac6804c39d2b0647f00dd2205f1f7bcad93fb4c471319b6497344c181a63c1"} Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.041574 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/2.log" Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.042070 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/1.log" Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.042146 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ghc5v" event={"ID":"e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad","Type":"ContainerStarted","Data":"b49818caafa23ca0987506e2734e20cc0b5b0b95da486c9afd92659a351da28f"} Jan 29 08:51:21 crc kubenswrapper[5031]: I0129 08:51:21.817796 5031 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-ff66k" Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049728 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"8a622532b77c5ab1be4c6b731179e43a2d88acf9bda33052f7af3f01f2938fb0"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049765 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"a7930f0dbbb9aac8749ce18695647d8178aa7bfa7a0717e9840a7c95df468b23"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049776 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"721f8c052f2a7af091c1c62fa48cc286da66b6b1bcd0dd1a7992bb0d9e898db0"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049786 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"7b7471ce91efefbe7ad215e3d04c7de5d62667197b7394540fcc3049270b96f1"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049795 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"ced92d2fa785bfcfcc4c81e49d102b4cb48d841cb7cac3fdcfd05d0a1f797665"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.049806 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"ead1e1b9b87ff7190a2198c3dbcc10927aa316073589ffc23f152ed7511cab39"} Jan 29 08:51:22 crc kubenswrapper[5031]: I0129 08:51:22.291050 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afca9b4-a79c-40db-8c5f-0369e09228b9" path="/var/lib/kubelet/pods/2afca9b4-a79c-40db-8c5f-0369e09228b9/volumes" Jan 29 08:51:24 crc kubenswrapper[5031]: I0129 08:51:24.066837 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"1b653ac3f7576eb6c04f3bb58daae6a3b62de202b7ba434dff5c88e90d34c9e7"} Jan 29 08:51:24 crc kubenswrapper[5031]: I0129 08:51:24.377292 5031 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 08:51:27 crc kubenswrapper[5031]: I0129 08:51:27.106275 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" event={"ID":"647c9007-54b2-4cb6-bbff-8e35c1893782","Type":"ContainerStarted","Data":"bf1f2083da27d092ad83abbb059be3075a6a26f4c085d44f1527b40b4ae9774d"} Jan 29 08:51:27 crc kubenswrapper[5031]: I0129 08:51:27.106944 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:27 crc kubenswrapper[5031]: I0129 08:51:27.106958 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:27 crc kubenswrapper[5031]: I0129 08:51:27.138806 5031 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" podStartSLOduration=8.138790309000001 podStartE2EDuration="8.138790309s" podCreationTimestamp="2026-01-29 08:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:51:27.13550283 +0000 UTC m=+767.635090802" watchObservedRunningTime="2026-01-29 08:51:27.138790309 +0000 UTC m=+767.638378261" Jan 29 08:51:27 crc kubenswrapper[5031]: I0129 08:51:27.140876 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:28 crc kubenswrapper[5031]: I0129 08:51:28.111313 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:28 crc kubenswrapper[5031]: I0129 08:51:28.140470 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:51:38 crc kubenswrapper[5031]: I0129 08:51:38.493398 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:51:38 crc kubenswrapper[5031]: I0129 08:51:38.493860 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:51:38 crc kubenswrapper[5031]: I0129 08:51:38.493908 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:51:38 crc kubenswrapper[5031]: I0129 08:51:38.494529 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:51:38 crc kubenswrapper[5031]: I0129 08:51:38.494593 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1" gracePeriod=600 Jan 29 08:51:38 crc kubenswrapper[5031]: E0129 08:51:38.555507 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458f6239_f61f_4283_b420_460b3fe9cf09.slice/crio-603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:51:39 crc kubenswrapper[5031]: I0129 08:51:39.168188 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1" exitCode=0 Jan 29 08:51:39 crc 
kubenswrapper[5031]: I0129 08:51:39.168278 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1"} Jan 29 08:51:39 crc kubenswrapper[5031]: I0129 08:51:39.168820 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb"} Jan 29 08:51:39 crc kubenswrapper[5031]: I0129 08:51:39.168844 5031 scope.go:117] "RemoveContainer" containerID="1c3a0de191718af0473675e2e22d56a8eed2e9db39353d5e8dce35ec5bdf4977" Jan 29 08:51:40 crc kubenswrapper[5031]: I0129 08:51:40.587471 5031 scope.go:117] "RemoveContainer" containerID="d7cd72ce50ad8afdc788316e98a76b5bd60d010fa855596c3636bfa6e546ecd6" Jan 29 08:51:41 crc kubenswrapper[5031]: I0129 08:51:41.184395 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ghc5v_e728eb2d-5b24-46b2-99f8-2cc7fe1e3aad/kube-multus/2.log" Jan 29 08:51:50 crc kubenswrapper[5031]: I0129 08:51:50.248627 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-422wz" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.440573 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665"] Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.442965 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.444783 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.449274 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665"] Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.603669 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk2t7\" (UniqueName: \"kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.603714 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.603779 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: 
\"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.704836 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.705292 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk2t7\" (UniqueName: \"kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.705319 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.705695 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.706229 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.723785 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk2t7\" (UniqueName: \"kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.760333 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:03 crc kubenswrapper[5031]: I0129 08:52:03.958085 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665"] Jan 29 08:52:04 crc kubenswrapper[5031]: I0129 08:52:04.311464 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerStarted","Data":"abcbbe43b482a6eca345929e21882105e101829137dbb0b3c59b2899ff1a9173"} Jan 29 08:52:04 crc kubenswrapper[5031]: I0129 08:52:04.311512 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerStarted","Data":"4ed70117b3f0fffb17519e6bb2ef6f14c08a9e9cb5e0e14bb07905b0425d70b8"} Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.317910 5031 generic.go:334] "Generic (PLEG): container finished" podID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerID="abcbbe43b482a6eca345929e21882105e101829137dbb0b3c59b2899ff1a9173" exitCode=0 Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.317971 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerDied","Data":"abcbbe43b482a6eca345929e21882105e101829137dbb0b3c59b2899ff1a9173"} Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.612334 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"] Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.614156 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.621108 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"] Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.627010 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.627107 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.627155 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.727860 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.727922 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.727972 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.728629 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.728839 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.750071 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t\") pod \"redhat-operators-zhgnh\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") " pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:05 crc kubenswrapper[5031]: I0129 08:52:05.929400 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:06 crc kubenswrapper[5031]: I0129 08:52:06.114831 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"] Jan 29 08:52:06 crc kubenswrapper[5031]: W0129 08:52:06.123180 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad60a3f_3cc5_40a9_ac0a_c2e589939bca.slice/crio-e4c3b5c461e9ce0819a71e963233fa0f0f0e53eff81b9997c41b89b6bdf74514 WatchSource:0}: Error finding container e4c3b5c461e9ce0819a71e963233fa0f0f0e53eff81b9997c41b89b6bdf74514: Status 404 returned error can't find the container with id e4c3b5c461e9ce0819a71e963233fa0f0f0e53eff81b9997c41b89b6bdf74514 Jan 29 08:52:06 crc kubenswrapper[5031]: I0129 08:52:06.325602 5031 generic.go:334] "Generic (PLEG): container finished" podID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerID="bfb1d15e36a0bfd1fb26e15911dd4d9a339370bd044c93f1b75f775df68ed8ff" exitCode=0 Jan 29 08:52:06 crc kubenswrapper[5031]: I0129 08:52:06.325665 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerDied","Data":"bfb1d15e36a0bfd1fb26e15911dd4d9a339370bd044c93f1b75f775df68ed8ff"} Jan 29 08:52:06 crc kubenswrapper[5031]: I0129 08:52:06.326569 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerStarted","Data":"e4c3b5c461e9ce0819a71e963233fa0f0f0e53eff81b9997c41b89b6bdf74514"} Jan 29 08:52:07 crc kubenswrapper[5031]: I0129 08:52:07.333701 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerStarted","Data":"4ae1cffd2f8f8b80769741b863d09ac01bd147d4c507b2808e3bc0b4eabac403"} Jan 29 08:52:07 crc kubenswrapper[5031]: I0129 08:52:07.335688 5031 generic.go:334] "Generic (PLEG): container finished" podID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerID="8d0988bff70ff03c1af946bc571ec3d1bdaf569098e3356083beadcf9afb8a2e" exitCode=0 Jan 29 08:52:07 crc kubenswrapper[5031]: I0129 08:52:07.335748 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerDied","Data":"8d0988bff70ff03c1af946bc571ec3d1bdaf569098e3356083beadcf9afb8a2e"} Jan 29 08:52:08 crc kubenswrapper[5031]: I0129 08:52:08.343866 5031 generic.go:334] "Generic (PLEG): container finished" podID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerID="f58dae82d1405fd4206c1851576fd96b42075fc8002ab9595c1fe4794c55e277" exitCode=0 Jan 29 08:52:08 crc kubenswrapper[5031]: I0129 08:52:08.343962 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" 
event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerDied","Data":"f58dae82d1405fd4206c1851576fd96b42075fc8002ab9595c1fe4794c55e277"} Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.695623 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.785234 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk2t7\" (UniqueName: \"kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7\") pod \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.785292 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle\") pod \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.785345 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util\") pod \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\" (UID: \"d15df353-3a05-45aa-8c9f-ba06ba2595d5\") " Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.786483 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle" (OuterVolumeSpecName: "bundle") pod "d15df353-3a05-45aa-8c9f-ba06ba2595d5" (UID: "d15df353-3a05-45aa-8c9f-ba06ba2595d5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.795213 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7" (OuterVolumeSpecName: "kube-api-access-wk2t7") pod "d15df353-3a05-45aa-8c9f-ba06ba2595d5" (UID: "d15df353-3a05-45aa-8c9f-ba06ba2595d5"). InnerVolumeSpecName "kube-api-access-wk2t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.802082 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util" (OuterVolumeSpecName: "util") pod "d15df353-3a05-45aa-8c9f-ba06ba2595d5" (UID: "d15df353-3a05-45aa-8c9f-ba06ba2595d5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.886753 5031 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.886790 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk2t7\" (UniqueName: \"kubernetes.io/projected/d15df353-3a05-45aa-8c9f-ba06ba2595d5-kube-api-access-wk2t7\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:09 crc kubenswrapper[5031]: I0129 08:52:09.886800 5031 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15df353-3a05-45aa-8c9f-ba06ba2595d5-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:10 crc kubenswrapper[5031]: I0129 08:52:10.356476 5031 generic.go:334] "Generic (PLEG): container finished" podID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerID="4ae1cffd2f8f8b80769741b863d09ac01bd147d4c507b2808e3bc0b4eabac403" exitCode=0 Jan 29 08:52:10 crc kubenswrapper[5031]: I0129 08:52:10.356824 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerDied","Data":"4ae1cffd2f8f8b80769741b863d09ac01bd147d4c507b2808e3bc0b4eabac403"} Jan 29 08:52:10 crc kubenswrapper[5031]: I0129 08:52:10.360161 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" event={"ID":"d15df353-3a05-45aa-8c9f-ba06ba2595d5","Type":"ContainerDied","Data":"4ed70117b3f0fffb17519e6bb2ef6f14c08a9e9cb5e0e14bb07905b0425d70b8"} Jan 29 08:52:10 crc kubenswrapper[5031]: I0129 08:52:10.360532 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed70117b3f0fffb17519e6bb2ef6f14c08a9e9cb5e0e14bb07905b0425d70b8" Jan 29 08:52:10 crc kubenswrapper[5031]: I0129 08:52:10.360267 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665" Jan 29 08:52:11 crc kubenswrapper[5031]: I0129 08:52:11.369441 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerStarted","Data":"1ae52040426139c17d3f9964e2b3bac209f02bb1497820ec2d87e655d1270c01"} Jan 29 08:52:11 crc kubenswrapper[5031]: I0129 08:52:11.389686 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhgnh" podStartSLOduration=2.011456491 podStartE2EDuration="6.389664783s" podCreationTimestamp="2026-01-29 08:52:05 +0000 UTC" firstStartedPulling="2026-01-29 08:52:06.364237018 +0000 UTC m=+806.863824970" lastFinishedPulling="2026-01-29 08:52:10.7424453 +0000 UTC m=+811.242033262" observedRunningTime="2026-01-29 08:52:11.385536071 +0000 UTC m=+811.885124043" watchObservedRunningTime="2026-01-29 08:52:11.389664783 +0000 UTC m=+811.889252735" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.901199 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-vdbl7"] Jan 29 08:52:13 crc kubenswrapper[5031]: E0129 08:52:13.901680 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="extract" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.901693 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="extract" Jan 29 08:52:13 crc kubenswrapper[5031]: E0129 08:52:13.901712 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="pull" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.901718 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="pull" Jan 29 08:52:13 crc kubenswrapper[5031]: E0129 08:52:13.901727 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="util" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.901734 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="util" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.901836 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15df353-3a05-45aa-8c9f-ba06ba2595d5" containerName="extract" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.902196 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.904054 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-k57fm" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.904403 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.904818 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 08:52:13 crc kubenswrapper[5031]: I0129 08:52:13.914330 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-vdbl7"] Jan 29 08:52:14 crc kubenswrapper[5031]: I0129 08:52:14.036821 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4sx\" (UniqueName: \"kubernetes.io/projected/1e390d20-964f-4337-a396-d56cf85b5a4d-kube-api-access-cr4sx\") pod \"nmstate-operator-646758c888-vdbl7\" (UID: \"1e390d20-964f-4337-a396-d56cf85b5a4d\") " pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" Jan 29 08:52:14 crc kubenswrapper[5031]: I0129 08:52:14.137722 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4sx\" (UniqueName: \"kubernetes.io/projected/1e390d20-964f-4337-a396-d56cf85b5a4d-kube-api-access-cr4sx\") pod \"nmstate-operator-646758c888-vdbl7\" (UID: \"1e390d20-964f-4337-a396-d56cf85b5a4d\") " pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" Jan 29 08:52:14 crc kubenswrapper[5031]: I0129 08:52:14.158877 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4sx\" (UniqueName: \"kubernetes.io/projected/1e390d20-964f-4337-a396-d56cf85b5a4d-kube-api-access-cr4sx\") pod \"nmstate-operator-646758c888-vdbl7\" (UID: \"1e390d20-964f-4337-a396-d56cf85b5a4d\") " pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" Jan 29 08:52:14 crc kubenswrapper[5031]: I0129 08:52:14.216794 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" Jan 29 08:52:14 crc kubenswrapper[5031]: I0129 08:52:14.493740 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-vdbl7"] Jan 29 08:52:15 crc kubenswrapper[5031]: I0129 08:52:15.393303 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" event={"ID":"1e390d20-964f-4337-a396-d56cf85b5a4d","Type":"ContainerStarted","Data":"11a7b7ab4acde798fd2e5abc9f17a575049fb3d0c2cfb81f0fefc40d1a6a902a"} Jan 29 08:52:15 crc kubenswrapper[5031]: I0129 08:52:15.929576 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:15 crc kubenswrapper[5031]: I0129 08:52:15.929611 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:16 crc kubenswrapper[5031]: I0129 08:52:16.967073 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zhgnh" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server" probeResult="failure" output=< Jan 29 08:52:16 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 08:52:16 crc kubenswrapper[5031]: > Jan 29 08:52:19 crc kubenswrapper[5031]: I0129 08:52:19.413997 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" event={"ID":"1e390d20-964f-4337-a396-d56cf85b5a4d","Type":"ContainerStarted","Data":"bb7e815cf7766e164ca5a287ac98d015101b952f956a2b88886574d731e582fc"} Jan 29 08:52:19 crc kubenswrapper[5031]: I0129 08:52:19.430128 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-vdbl7" podStartSLOduration=2.273344392 podStartE2EDuration="6.430106984s" podCreationTimestamp="2026-01-29 08:52:13 +0000 UTC" firstStartedPulling="2026-01-29 08:52:14.50087144 +0000 UTC m=+815.000459392" lastFinishedPulling="2026-01-29 08:52:18.657634042 +0000 UTC m=+819.157221984" observedRunningTime="2026-01-29 08:52:19.426853016 +0000 UTC m=+819.926440988" watchObservedRunningTime="2026-01-29 08:52:19.430106984 +0000 UTC m=+819.929694936" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.602049 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w2269"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.603192 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.605742 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mvr6v" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.624013 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w2269"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.648912 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wzjdc"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.649984 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.660089 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.662119 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.665015 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.674104 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.768109 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.769069 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.771635 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.772133 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.773463 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p2rt4" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.784128 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786086 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a6126a5-5e52-418a-ba32-ce426e8ce58c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786121 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zs4\" (UniqueName: \"kubernetes.io/projected/27616237-18b5-463e-be46-59392bbff884-kube-api-access-26zs4\") pod \"nmstate-metrics-54757c584b-w2269\" (UID: \"27616237-18b5-463e-be46-59392bbff884\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786146 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-ovs-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786200 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-dbus-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc 
kubenswrapper[5031]: I0129 08:52:23.786217 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-nmstate-lock\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786256 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tncj\" (UniqueName: \"kubernetes.io/projected/21eadbd2-15f3-47aa-8428-fb22325e29a6-kube-api-access-5tncj\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.786287 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjzj\" (UniqueName: \"kubernetes.io/projected/2a6126a5-5e52-418a-ba32-ce426e8ce58c-kube-api-access-nxjzj\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887364 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tncj\" (UniqueName: \"kubernetes.io/projected/21eadbd2-15f3-47aa-8428-fb22325e29a6-kube-api-access-5tncj\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887421 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjzj\" (UniqueName: \"kubernetes.io/projected/2a6126a5-5e52-418a-ba32-ce426e8ce58c-kube-api-access-nxjzj\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887456 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a6126a5-5e52-418a-ba32-ce426e8ce58c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887473 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zs4\" (UniqueName: \"kubernetes.io/projected/27616237-18b5-463e-be46-59392bbff884-kube-api-access-26zs4\") pod \"nmstate-metrics-54757c584b-w2269\" (UID: \"27616237-18b5-463e-be46-59392bbff884\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887488 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-ovs-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887515 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c55f203-c18f-402b-a766-a1f291a5b3dc-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887552 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-dbus-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887573 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-nmstate-lock\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887588 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c55f203-c18f-402b-a766-a1f291a5b3dc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.887608 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9jq\" (UniqueName: \"kubernetes.io/projected/5c55f203-c18f-402b-a766-a1f291a5b3dc-kube-api-access-ks9jq\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.888587 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-ovs-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.888817 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-nmstate-lock\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.888928 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/21eadbd2-15f3-47aa-8428-fb22325e29a6-dbus-socket\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.894676 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a6126a5-5e52-418a-ba32-ce426e8ce58c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.910347 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zs4\" (UniqueName: 
\"kubernetes.io/projected/27616237-18b5-463e-be46-59392bbff884-kube-api-access-26zs4\") pod \"nmstate-metrics-54757c584b-w2269\" (UID: \"27616237-18b5-463e-be46-59392bbff884\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.912401 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjzj\" (UniqueName: \"kubernetes.io/projected/2a6126a5-5e52-418a-ba32-ce426e8ce58c-kube-api-access-nxjzj\") pod \"nmstate-webhook-8474b5b9d8-scf9x\" (UID: \"2a6126a5-5e52-418a-ba32-ce426e8ce58c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.914926 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tncj\" (UniqueName: \"kubernetes.io/projected/21eadbd2-15f3-47aa-8428-fb22325e29a6-kube-api-access-5tncj\") pod \"nmstate-handler-wzjdc\" (UID: \"21eadbd2-15f3-47aa-8428-fb22325e29a6\") " pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.918410 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.966973 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.984790 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.985567 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dbbbdb4c9-vmvrb"] Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.986193 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:23 crc kubenswrapper[5031]: I0129 08:52:23.994361 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c55f203-c18f-402b-a766-a1f291a5b3dc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.010612 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dbbbdb4c9-vmvrb"] Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.012604 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c55f203-c18f-402b-a766-a1f291a5b3dc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.012656 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9jq\" (UniqueName: \"kubernetes.io/projected/5c55f203-c18f-402b-a766-a1f291a5b3dc-kube-api-access-ks9jq\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.013228 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c55f203-c18f-402b-a766-a1f291a5b3dc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.023431 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c55f203-c18f-402b-a766-a1f291a5b3dc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.040583 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9jq\" (UniqueName: \"kubernetes.io/projected/5c55f203-c18f-402b-a766-a1f291a5b3dc-kube-api-access-ks9jq\") pod \"nmstate-console-plugin-7754f76f8b-gcrhb\" (UID: \"5c55f203-c18f-402b-a766-a1f291a5b3dc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.081642 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.113733 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.114081 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-oauth-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.114115 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk4m5\" (UniqueName: \"kubernetes.io/projected/de4a9e12-b569-48e8-8c22-c88481e7a973-kube-api-access-rk4m5\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.114143 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-console-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.114170 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-oauth-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.115631 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-service-ca\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.115668 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-trusted-ca-bundle\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216635 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk4m5\" (UniqueName: \"kubernetes.io/projected/de4a9e12-b569-48e8-8c22-c88481e7a973-kube-api-access-rk4m5\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216692 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-console-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216722 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-oauth-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216739 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-service-ca\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216760 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-trusted-ca-bundle\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216797 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.216814 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-oauth-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.218068 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-oauth-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.219807 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-service-ca\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.221181 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-console-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.222190 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/de4a9e12-b569-48e8-8c22-c88481e7a973-trusted-ca-bundle\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.225106 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-serving-cert\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.225971 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de4a9e12-b569-48e8-8c22-c88481e7a973-console-oauth-config\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.234267 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk4m5\" (UniqueName: \"kubernetes.io/projected/de4a9e12-b569-48e8-8c22-c88481e7a973-kube-api-access-rk4m5\") pod \"console-5dbbbdb4c9-vmvrb\" (UID: \"de4a9e12-b569-48e8-8c22-c88481e7a973\") " pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.258288 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w2269"] Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.312496 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dbbbdb4c9-vmvrb" Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.445049 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb"] Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.447507 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" event={"ID":"27616237-18b5-463e-be46-59392bbff884","Type":"ContainerStarted","Data":"eeb59a7a14c9a42c7797d09ce1b19c5a122f8addbf8594e4af23a18cbcd0350d"} Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.449502 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wzjdc" event={"ID":"21eadbd2-15f3-47aa-8428-fb22325e29a6","Type":"ContainerStarted","Data":"1e1efae8b70744cdc9062c84d09208d887ad1cfa66ab72270cabce355dba66ac"} Jan 29 08:52:24 crc kubenswrapper[5031]: W0129 08:52:24.451790 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c55f203_c18f_402b_a766_a1f291a5b3dc.slice/crio-ff99d1967527ab0afebe8751405697a18307ef93e2f563a97c49290c059e95cb WatchSource:0}: Error finding container ff99d1967527ab0afebe8751405697a18307ef93e2f563a97c49290c059e95cb: Status 404 returned error can't find the container with id ff99d1967527ab0afebe8751405697a18307ef93e2f563a97c49290c059e95cb Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.527077 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dbbbdb4c9-vmvrb"] Jan 29 08:52:24 crc kubenswrapper[5031]: W0129 08:52:24.532697 5031 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde4a9e12_b569_48e8_8c22_c88481e7a973.slice/crio-4a07e37bbe06706468ea328a129604eeafbb06fb750478cedf49fe802cf6b372 WatchSource:0}: Error finding container 4a07e37bbe06706468ea328a129604eeafbb06fb750478cedf49fe802cf6b372: Status 404 returned error can't find the container with id 4a07e37bbe06706468ea328a129604eeafbb06fb750478cedf49fe802cf6b372 Jan 29 08:52:24 crc kubenswrapper[5031]: I0129 08:52:24.568638 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x"] Jan 29 08:52:24 crc kubenswrapper[5031]: W0129 08:52:24.570323 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6126a5_5e52_418a_ba32_ce426e8ce58c.slice/crio-979c44e1b033f1eb68c1e18d0f20d2d9aea2af76933944089d51c7a48d5d7f62 WatchSource:0}: Error finding container 979c44e1b033f1eb68c1e18d0f20d2d9aea2af76933944089d51c7a48d5d7f62: Status 404 returned error can't find the container with id 979c44e1b033f1eb68c1e18d0f20d2d9aea2af76933944089d51c7a48d5d7f62 Jan 29 08:52:25 crc kubenswrapper[5031]: I0129 08:52:25.455417 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" event={"ID":"5c55f203-c18f-402b-a766-a1f291a5b3dc","Type":"ContainerStarted","Data":"ff99d1967527ab0afebe8751405697a18307ef93e2f563a97c49290c059e95cb"} Jan 29 08:52:25 crc kubenswrapper[5031]: I0129 08:52:25.457537 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbbbdb4c9-vmvrb" event={"ID":"de4a9e12-b569-48e8-8c22-c88481e7a973","Type":"ContainerStarted","Data":"4a07e37bbe06706468ea328a129604eeafbb06fb750478cedf49fe802cf6b372"} Jan 29 08:52:25 crc kubenswrapper[5031]: I0129 08:52:25.458434 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" event={"ID":"2a6126a5-5e52-418a-ba32-ce426e8ce58c","Type":"ContainerStarted","Data":"979c44e1b033f1eb68c1e18d0f20d2d9aea2af76933944089d51c7a48d5d7f62"} Jan 29 08:52:25 crc kubenswrapper[5031]: I0129 08:52:25.981877 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:26 crc kubenswrapper[5031]: I0129 08:52:26.039656 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:26 crc kubenswrapper[5031]: I0129 08:52:26.215363 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"] Jan 29 08:52:26 crc kubenswrapper[5031]: I0129 08:52:26.467553 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbbbdb4c9-vmvrb" event={"ID":"de4a9e12-b569-48e8-8c22-c88481e7a973","Type":"ContainerStarted","Data":"f4804d08fb151dfe2474c7c279bbcb52120a6f63fbb70983870eff44cfeb1863"} Jan 29 08:52:26 crc kubenswrapper[5031]: I0129 08:52:26.486687 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dbbbdb4c9-vmvrb" podStartSLOduration=3.4866159420000002 podStartE2EDuration="3.486615942s" podCreationTimestamp="2026-01-29 08:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:52:26.485058019 +0000 UTC m=+826.984645981" watchObservedRunningTime="2026-01-29 08:52:26.486615942 +0000 UTC m=+826.986203904" Jan 
Jan 29 08:52:27 crc kubenswrapper[5031]: I0129 08:52:27.479498 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhgnh" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server" containerID="cri-o://1ae52040426139c17d3f9964e2b3bac209f02bb1497820ec2d87e655d1270c01" gracePeriod=2
Jan 29 08:52:28 crc kubenswrapper[5031]: I0129 08:52:28.495760 5031 generic.go:334] "Generic (PLEG): container finished" podID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerID="1ae52040426139c17d3f9964e2b3bac209f02bb1497820ec2d87e655d1270c01" exitCode=0
Jan 29 08:52:28 crc kubenswrapper[5031]: I0129 08:52:28.495996 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerDied","Data":"1ae52040426139c17d3f9964e2b3bac209f02bb1497820ec2d87e655d1270c01"}
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.283483 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhgnh"
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.401833 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content\") pod \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") "
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.401884 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities\") pod \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") "
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.401970 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t\") pod \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\" (UID: \"aad60a3f-3cc5-40a9-ac0a-c2e589939bca\") "
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.403104 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities" (OuterVolumeSpecName: "utilities") pod "aad60a3f-3cc5-40a9-ac0a-c2e589939bca" (UID: "aad60a3f-3cc5-40a9-ac0a-c2e589939bca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.408269 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t" (OuterVolumeSpecName: "kube-api-access-bxv5t") pod "aad60a3f-3cc5-40a9-ac0a-c2e589939bca" (UID: "aad60a3f-3cc5-40a9-ac0a-c2e589939bca"). InnerVolumeSpecName "kube-api-access-bxv5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.502947 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-kube-api-access-bxv5t\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.502988 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.510688 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhgnh" event={"ID":"aad60a3f-3cc5-40a9-ac0a-c2e589939bca","Type":"ContainerDied","Data":"e4c3b5c461e9ce0819a71e963233fa0f0f0e53eff81b9997c41b89b6bdf74514"} Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.510778 5031 scope.go:117] "RemoveContainer" containerID="1ae52040426139c17d3f9964e2b3bac209f02bb1497820ec2d87e655d1270c01" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.510705 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhgnh" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.514112 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" event={"ID":"2a6126a5-5e52-418a-ba32-ce426e8ce58c","Type":"ContainerStarted","Data":"763bbbd5b495fe9e93fdd45d275f66ba8775b10ee50ad04e0676daf5f6744f48"} Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.515039 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.517901 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aad60a3f-3cc5-40a9-ac0a-c2e589939bca" (UID: "aad60a3f-3cc5-40a9-ac0a-c2e589939bca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.518259 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" event={"ID":"27616237-18b5-463e-be46-59392bbff884","Type":"ContainerStarted","Data":"412e1a94d07781c3fa449b60d78364c7874ade6f03b183a2dc48c85a0185d378"} Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.520037 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" event={"ID":"5c55f203-c18f-402b-a766-a1f291a5b3dc","Type":"ContainerStarted","Data":"58042d7412927382edf7022eb9062cf50100391c4894693fb685413c3fbf7b08"} Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.521617 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wzjdc" event={"ID":"21eadbd2-15f3-47aa-8428-fb22325e29a6","Type":"ContainerStarted","Data":"d0cb98b79e0f5cc263dba4a0d98393ca0d42b7a992d9a7401dd811dd1e7c93b5"} Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.521738 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wzjdc" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.534758 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x" podStartSLOduration=2.061408844 podStartE2EDuration="7.534739965s" podCreationTimestamp="2026-01-29 08:52:23 +0000 UTC" firstStartedPulling="2026-01-29 08:52:24.577380557 +0000 UTC m=+825.076968509" lastFinishedPulling="2026-01-29 08:52:30.050711678 +0000 UTC m=+830.550299630" observedRunningTime="2026-01-29 08:52:30.528032994 +0000 UTC m=+831.027620946" watchObservedRunningTime="2026-01-29 08:52:30.534739965 +0000 UTC m=+831.034327917" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.535830 5031 scope.go:117] "RemoveContainer" containerID="4ae1cffd2f8f8b80769741b863d09ac01bd147d4c507b2808e3bc0b4eabac403" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.552567 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wzjdc" podStartSLOduration=1.589069731 podStartE2EDuration="7.552548792s" podCreationTimestamp="2026-01-29 08:52:23 +0000 UTC" firstStartedPulling="2026-01-29 08:52:24.061065065 +0000 UTC m=+824.560653017" lastFinishedPulling="2026-01-29 08:52:30.024544126 +0000 UTC m=+830.524132078" observedRunningTime="2026-01-29 08:52:30.550047035 +0000 UTC m=+831.049634987" watchObservedRunningTime="2026-01-29 08:52:30.552548792 +0000 UTC m=+831.052136744" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.560731 5031 scope.go:117] "RemoveContainer" containerID="bfb1d15e36a0bfd1fb26e15911dd4d9a339370bd044c93f1b75f775df68ed8ff" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.570331 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gcrhb" podStartSLOduration=1.9977043399999999 podStartE2EDuration="7.570307479s" podCreationTimestamp="2026-01-29 08:52:23 +0000 UTC" firstStartedPulling="2026-01-29 08:52:24.454118579 +0000 UTC m=+824.953706521" lastFinishedPulling="2026-01-29 08:52:30.026721698 +0000 UTC m=+830.526309660" observedRunningTime="2026-01-29 08:52:30.566040564 +0000 UTC m=+831.065628516" watchObservedRunningTime="2026-01-29 08:52:30.570307479 +0000 UTC m=+831.069895431" Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.604731 5031 
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.604731 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad60a3f-3cc5-40a9-ac0a-c2e589939bca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.835750 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"]
Jan 29 08:52:30 crc kubenswrapper[5031]: I0129 08:52:30.844934 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhgnh"]
Jan 29 08:52:32 crc kubenswrapper[5031]: I0129 08:52:32.292284 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" path="/var/lib/kubelet/pods/aad60a3f-3cc5-40a9-ac0a-c2e589939bca/volumes"
Jan 29 08:52:32 crc kubenswrapper[5031]: I0129 08:52:32.539300 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" event={"ID":"27616237-18b5-463e-be46-59392bbff884","Type":"ContainerStarted","Data":"5f467486182d77a8b38b6405423eb6e5d654d4751d9cb965dc33efe3766d74a9"}
Jan 29 08:52:32 crc kubenswrapper[5031]: I0129 08:52:32.560102 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-w2269" podStartSLOduration=1.5977177729999998 podStartE2EDuration="9.560084882s" podCreationTimestamp="2026-01-29 08:52:23 +0000 UTC" firstStartedPulling="2026-01-29 08:52:24.275760375 +0000 UTC m=+824.775348327" lastFinishedPulling="2026-01-29 08:52:32.238127484 +0000 UTC m=+832.737715436" observedRunningTime="2026-01-29 08:52:32.557828147 +0000 UTC m=+833.057416119" watchObservedRunningTime="2026-01-29 08:52:32.560084882 +0000 UTC m=+833.059672834"
Jan 29 08:52:34 crc kubenswrapper[5031]: I0129 08:52:34.313242 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dbbbdb4c9-vmvrb"
Jan 29 08:52:34 crc kubenswrapper[5031]: I0129 08:52:34.313630 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dbbbdb4c9-vmvrb"
Jan 29 08:52:34 crc kubenswrapper[5031]: I0129 08:52:34.317549 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dbbbdb4c9-vmvrb"
Jan 29 08:52:34 crc kubenswrapper[5031]: I0129 08:52:34.554120 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dbbbdb4c9-vmvrb"
Jan 29 08:52:34 crc kubenswrapper[5031]: I0129 08:52:34.606020 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"]
Jan 29 08:52:38 crc kubenswrapper[5031]: I0129 08:52:38.992996 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wzjdc"
Jan 29 08:52:43 crc kubenswrapper[5031]: I0129 08:52:43.993651 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-scf9x"
Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.087534 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4"]
Jan 29 08:52:57 crc kubenswrapper[5031]: E0129 08:52:57.089536 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server"
Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.089619 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server"
CPUSet assignment" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server" Jan 29 08:52:57 crc kubenswrapper[5031]: E0129 08:52:57.089691 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="extract-content" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.089764 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="extract-content" Jan 29 08:52:57 crc kubenswrapper[5031]: E0129 08:52:57.089842 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="extract-utilities" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.089905 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="extract-utilities" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.090114 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad60a3f-3cc5-40a9-ac0a-c2e589939bca" containerName="registry-server" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.091108 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.095533 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.114916 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4"] Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.149100 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.149551 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.149612 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbcx\" (UniqueName: \"kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.250017 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbcx\" (UniqueName: \"kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: 
\"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.250365 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.250515 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.251468 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.251770 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.272761 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbcx\" (UniqueName: \"kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.455940 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:52:57 crc kubenswrapper[5031]: I0129 08:52:57.670282 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4"] Jan 29 08:52:58 crc kubenswrapper[5031]: I0129 08:52:58.686481 5031 generic.go:334] "Generic (PLEG): container finished" podID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerID="f15497932363cef897e6c2206a469f8a2cf505f2eb18cee9bda905a9d7ef6a3f" exitCode=0 Jan 29 08:52:58 crc kubenswrapper[5031]: I0129 08:52:58.686594 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" event={"ID":"1f48659c-8c60-4f11-b68f-596ddf2d1b73","Type":"ContainerDied","Data":"f15497932363cef897e6c2206a469f8a2cf505f2eb18cee9bda905a9d7ef6a3f"} Jan 29 08:52:58 crc kubenswrapper[5031]: I0129 08:52:58.686887 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" event={"ID":"1f48659c-8c60-4f11-b68f-596ddf2d1b73","Type":"ContainerStarted","Data":"1b56b54613efba588285ec3681f2f170540b59c714f2287704ad0104947caa30"} Jan 29 08:52:59 crc kubenswrapper[5031]: I0129 08:52:59.648569 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lbjm4" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" containerID="cri-o://a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489" gracePeriod=15 Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.061072 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lbjm4_f07acf69-4876-413e-b098-b7074c7018c2/console/0.log" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.061169 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189130 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189479 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189523 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dbvl\" (UniqueName: \"kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189542 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189567 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189613 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.189644 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config\") pod \"f07acf69-4876-413e-b098-b7074c7018c2\" (UID: \"f07acf69-4876-413e-b098-b7074c7018c2\") " Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.190232 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.190282 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca" (OuterVolumeSpecName: "service-ca") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.190290 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config" (OuterVolumeSpecName: "console-config") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.190653 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.196176 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.198882 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.198894 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl" (OuterVolumeSpecName: "kube-api-access-6dbvl") pod "f07acf69-4876-413e-b098-b7074c7018c2" (UID: "f07acf69-4876-413e-b098-b7074c7018c2"). InnerVolumeSpecName "kube-api-access-6dbvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291120 5031 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291158 5031 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291171 5031 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291183 5031 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291196 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dbvl\" (UniqueName: \"kubernetes.io/projected/f07acf69-4876-413e-b098-b7074c7018c2-kube-api-access-6dbvl\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291209 5031 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f07acf69-4876-413e-b098-b7074c7018c2-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.291219 5031 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f07acf69-4876-413e-b098-b7074c7018c2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700062 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lbjm4_f07acf69-4876-413e-b098-b7074c7018c2/console/0.log" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700108 5031 generic.go:334] "Generic (PLEG): container finished" podID="f07acf69-4876-413e-b098-b7074c7018c2" containerID="a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489" exitCode=2 Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700161 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbjm4" event={"ID":"f07acf69-4876-413e-b098-b7074c7018c2","Type":"ContainerDied","Data":"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489"} Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700160 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lbjm4" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700183 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbjm4" event={"ID":"f07acf69-4876-413e-b098-b7074c7018c2","Type":"ContainerDied","Data":"f3d9efe03cd5860068bc480a71c4b6263e4ec93317ae88038dcb8852f78910c5"} Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.700197 5031 scope.go:117] "RemoveContainer" containerID="a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.703039 5031 generic.go:334] "Generic (PLEG): container finished" podID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerID="1916356d5256c3259897c63b321ada53a5d2ca3660813070263a7e67d9626c62" exitCode=0 Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.703077 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" event={"ID":"1f48659c-8c60-4f11-b68f-596ddf2d1b73","Type":"ContainerDied","Data":"1916356d5256c3259897c63b321ada53a5d2ca3660813070263a7e67d9626c62"} Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.719172 5031 scope.go:117] "RemoveContainer" containerID="a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489" Jan 29 08:53:00 crc kubenswrapper[5031]: E0129 08:53:00.719617 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489\": container with ID starting with a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489 not found: ID does not exist" containerID="a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.719661 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489"} err="failed to get container status \"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489\": rpc error: code = NotFound desc = could not find container \"a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489\": container with ID starting with a5f7d7a7b32dadd83233cf30114481f237d104c22796e8d1b75e58061e7cf489 not found: ID does not exist" Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.719705 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"] Jan 29 08:53:00 crc kubenswrapper[5031]: I0129 08:53:00.723303 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-lbjm4"] Jan 29 08:53:01 crc kubenswrapper[5031]: I0129 08:53:01.711343 5031 generic.go:334] "Generic (PLEG): container finished" podID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerID="937a0c05253683db1e56cc8761a331aa9241449dfac62809490da2011f2d710a" exitCode=0 Jan 29 08:53:01 crc kubenswrapper[5031]: I0129 08:53:01.711417 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" event={"ID":"1f48659c-8c60-4f11-b68f-596ddf2d1b73","Type":"ContainerDied","Data":"937a0c05253683db1e56cc8761a331aa9241449dfac62809490da2011f2d710a"} Jan 29 08:53:02 crc kubenswrapper[5031]: I0129 08:53:02.290138 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07acf69-4876-413e-b098-b7074c7018c2" 
path="/var/lib/kubelet/pods/f07acf69-4876-413e-b098-b7074c7018c2/volumes" Jan 29 08:53:02 crc kubenswrapper[5031]: I0129 08:53:02.960462 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.129852 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbcx\" (UniqueName: \"kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx\") pod \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.129938 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle\") pod \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.129993 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util\") pod \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\" (UID: \"1f48659c-8c60-4f11-b68f-596ddf2d1b73\") " Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.131251 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle" (OuterVolumeSpecName: "bundle") pod "1f48659c-8c60-4f11-b68f-596ddf2d1b73" (UID: "1f48659c-8c60-4f11-b68f-596ddf2d1b73"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.138409 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx" (OuterVolumeSpecName: "kube-api-access-cfbcx") pod "1f48659c-8c60-4f11-b68f-596ddf2d1b73" (UID: "1f48659c-8c60-4f11-b68f-596ddf2d1b73"). InnerVolumeSpecName "kube-api-access-cfbcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.231308 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbcx\" (UniqueName: \"kubernetes.io/projected/1f48659c-8c60-4f11-b68f-596ddf2d1b73-kube-api-access-cfbcx\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.231347 5031 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.542703 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util" (OuterVolumeSpecName: "util") pod "1f48659c-8c60-4f11-b68f-596ddf2d1b73" (UID: "1f48659c-8c60-4f11-b68f-596ddf2d1b73"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.637254 5031 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f48659c-8c60-4f11-b68f-596ddf2d1b73-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.731153 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" event={"ID":"1f48659c-8c60-4f11-b68f-596ddf2d1b73","Type":"ContainerDied","Data":"1b56b54613efba588285ec3681f2f170540b59c714f2287704ad0104947caa30"} Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.731211 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b56b54613efba588285ec3681f2f170540b59c714f2287704ad0104947caa30" Jan 29 08:53:03 crc kubenswrapper[5031]: I0129 08:53:03.731277 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.962188 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"] Jan 29 08:53:11 crc kubenswrapper[5031]: E0129 08:53:11.963033 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="extract" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963049 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="extract" Jan 29 08:53:11 crc kubenswrapper[5031]: E0129 08:53:11.963059 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="util" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963066 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="util" Jan 29 08:53:11 crc kubenswrapper[5031]: E0129 08:53:11.963085 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="pull" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963093 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="pull" Jan 29 08:53:11 crc kubenswrapper[5031]: E0129 08:53:11.963106 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963113 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963227 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07acf69-4876-413e-b098-b7074c7018c2" containerName="console" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963237 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f48659c-8c60-4f11-b68f-596ddf2d1b73" containerName="extract" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.963744 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.965426 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9cd72" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.965545 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.965640 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.965691 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.965762 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 08:53:11 crc kubenswrapper[5031]: I0129 08:53:11.985181 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"] Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.060863 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-apiservice-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.060916 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-webhook-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.060966 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htm2b\" (UniqueName: \"kubernetes.io/projected/417f7fc8-934e-415e-89cc-fb09ba21917e-kube-api-access-htm2b\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.161829 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-apiservice-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.161872 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-webhook-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.161920 5031 
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.161920 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htm2b\" (UniqueName: \"kubernetes.io/projected/417f7fc8-934e-415e-89cc-fb09ba21917e-kube-api-access-htm2b\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.167438 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-apiservice-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.167454 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/417f7fc8-934e-415e-89cc-fb09ba21917e-webhook-cert\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.182705 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htm2b\" (UniqueName: \"kubernetes.io/projected/417f7fc8-934e-415e-89cc-fb09ba21917e-kube-api-access-htm2b\") pod \"metallb-operator-controller-manager-7777f7948d-dxh4l\" (UID: \"417f7fc8-934e-415e-89cc-fb09ba21917e\") " pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.219846 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx"]
Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.220757 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx"
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.223247 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2rh8c" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.223959 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.225027 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.235151 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx"] Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.263515 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-apiservice-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.263567 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkrw\" (UniqueName: \"kubernetes.io/projected/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-kube-api-access-rvkrw\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.263759 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-webhook-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.279330 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.368053 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-webhook-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.368458 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-apiservice-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.368484 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkrw\" (UniqueName: \"kubernetes.io/projected/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-kube-api-access-rvkrw\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.378391 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-apiservice-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.380165 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-webhook-cert\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.389489 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkrw\" (UniqueName: \"kubernetes.io/projected/729c722e-e67a-4ff6-a4cf-0f6a68fd2c66-kube-api-access-rvkrw\") pod \"metallb-operator-webhook-server-7d7d76dfc-zj8mx\" (UID: \"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66\") " pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.540670 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.596638 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l"] Jan 29 08:53:12 crc kubenswrapper[5031]: I0129 08:53:12.787002 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" event={"ID":"417f7fc8-934e-415e-89cc-fb09ba21917e","Type":"ContainerStarted","Data":"f826f9f0039036c25c0da35b5d1795f6800a269dab082c567cb00139906bc81f"} Jan 29 08:53:13 crc kubenswrapper[5031]: I0129 08:53:13.059192 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx"] Jan 29 08:53:13 crc kubenswrapper[5031]: W0129 08:53:13.068250 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729c722e_e67a_4ff6_a4cf_0f6a68fd2c66.slice/crio-440db322c285d22f7ad37ad341d3c297f3c54f87b89a3590f65e1e90807bd975 WatchSource:0}: Error finding container 440db322c285d22f7ad37ad341d3c297f3c54f87b89a3590f65e1e90807bd975: Status 404 returned error can't find the container with id 440db322c285d22f7ad37ad341d3c297f3c54f87b89a3590f65e1e90807bd975 Jan 29 08:53:13 crc kubenswrapper[5031]: I0129 08:53:13.793920 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" event={"ID":"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66","Type":"ContainerStarted","Data":"440db322c285d22f7ad37ad341d3c297f3c54f87b89a3590f65e1e90807bd975"} Jan 29 08:53:16 crc kubenswrapper[5031]: I0129 08:53:16.817186 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" event={"ID":"417f7fc8-934e-415e-89cc-fb09ba21917e","Type":"ContainerStarted","Data":"f0e619e99ea9b998d9ec00da9b3c476edd7af1d8044dc310e254f315373cafcf"} Jan 29 08:53:16 crc kubenswrapper[5031]: I0129 08:53:16.818123 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:16 crc kubenswrapper[5031]: I0129 08:53:16.844578 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" podStartSLOduration=1.914043892 podStartE2EDuration="5.844559482s" podCreationTimestamp="2026-01-29 08:53:11 +0000 UTC" firstStartedPulling="2026-01-29 08:53:12.623065143 +0000 UTC m=+873.122653105" lastFinishedPulling="2026-01-29 08:53:16.553580743 +0000 UTC m=+877.053168695" observedRunningTime="2026-01-29 08:53:16.841673256 +0000 UTC m=+877.341261228" watchObservedRunningTime="2026-01-29 08:53:16.844559482 +0000 UTC m=+877.344147434" Jan 29 08:53:19 crc kubenswrapper[5031]: I0129 08:53:19.845855 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" event={"ID":"729c722e-e67a-4ff6-a4cf-0f6a68fd2c66","Type":"ContainerStarted","Data":"10055ac5c5313e122bd4546a515b696da0ecb1f0dd95ff901792ec890f20a38a"} Jan 29 08:53:19 crc kubenswrapper[5031]: I0129 08:53:19.847651 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:19 crc kubenswrapper[5031]: I0129 08:53:19.873730 5031 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" podStartSLOduration=2.116421037 podStartE2EDuration="7.873714604s" podCreationTimestamp="2026-01-29 08:53:12 +0000 UTC" firstStartedPulling="2026-01-29 08:53:13.071452622 +0000 UTC m=+873.571040574" lastFinishedPulling="2026-01-29 08:53:18.828746189 +0000 UTC m=+879.328334141" observedRunningTime="2026-01-29 08:53:19.870449746 +0000 UTC m=+880.370044509" watchObservedRunningTime="2026-01-29 08:53:19.873714604 +0000 UTC m=+880.373302556" Jan 29 08:53:32 crc kubenswrapper[5031]: I0129 08:53:32.545126 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7d7d76dfc-zj8mx" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.099014 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.100723 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.104039 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.300304 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.300536 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjfc\" (UniqueName: \"kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.300664 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.402055 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.402527 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjfc\" (UniqueName: \"kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.402686 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.402895 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.403101 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.429455 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjfc\" (UniqueName: \"kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc\") pod \"community-operators-bwlrd\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.725567 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:35 crc kubenswrapper[5031]: I0129 08:53:35.972016 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:35 crc kubenswrapper[5031]: W0129 08:53:35.984440 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187b6311_0e43_41c5_9663_246cd1cd260b.slice/crio-6334d8ef86121eb05e7bb13cb70053c0b93ab164f2919cd433c1f8d986051a12 WatchSource:0}: Error finding container 6334d8ef86121eb05e7bb13cb70053c0b93ab164f2919cd433c1f8d986051a12: Status 404 returned error can't find the container with id 6334d8ef86121eb05e7bb13cb70053c0b93ab164f2919cd433c1f8d986051a12 Jan 29 08:53:36 crc kubenswrapper[5031]: I0129 08:53:36.952918 5031 generic.go:334] "Generic (PLEG): container finished" podID="187b6311-0e43-41c5-9663-246cd1cd260b" containerID="37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7" exitCode=0 Jan 29 08:53:36 crc kubenswrapper[5031]: I0129 08:53:36.953006 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerDied","Data":"37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7"} Jan 29 08:53:36 crc kubenswrapper[5031]: I0129 08:53:36.953550 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerStarted","Data":"6334d8ef86121eb05e7bb13cb70053c0b93ab164f2919cd433c1f8d986051a12"} Jan 29 08:53:37 crc kubenswrapper[5031]: I0129 08:53:37.964116 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerStarted","Data":"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad"} Jan 29 08:53:38 crc 
kubenswrapper[5031]: I0129 08:53:38.494005 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:38 crc kubenswrapper[5031]: I0129 08:53:38.494065 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:38 crc kubenswrapper[5031]: I0129 08:53:38.972484 5031 generic.go:334] "Generic (PLEG): container finished" podID="187b6311-0e43-41c5-9663-246cd1cd260b" containerID="be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad" exitCode=0 Jan 29 08:53:38 crc kubenswrapper[5031]: I0129 08:53:38.972523 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerDied","Data":"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad"} Jan 29 08:53:39 crc kubenswrapper[5031]: I0129 08:53:39.980996 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerStarted","Data":"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076"} Jan 29 08:53:40 crc kubenswrapper[5031]: I0129 08:53:40.002277 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bwlrd" podStartSLOduration=2.608543742 podStartE2EDuration="5.002254858s" podCreationTimestamp="2026-01-29 08:53:35 +0000 UTC" firstStartedPulling="2026-01-29 08:53:36.954455744 +0000 UTC m=+897.454043696" lastFinishedPulling="2026-01-29 08:53:39.34816686 +0000 UTC m=+899.847754812" observedRunningTime="2026-01-29 08:53:39.999344172 +0000 UTC m=+900.498932114" watchObservedRunningTime="2026-01-29 08:53:40.002254858 +0000 UTC m=+900.501842800" Jan 29 08:53:45 crc kubenswrapper[5031]: I0129 08:53:45.726265 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:45 crc kubenswrapper[5031]: I0129 08:53:45.726891 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:45 crc kubenswrapper[5031]: I0129 08:53:45.776378 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:46 crc kubenswrapper[5031]: I0129 08:53:46.066029 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.080104 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.080331 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bwlrd" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="registry-server" containerID="cri-o://2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076" gracePeriod=2
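The pod_startup_latency_tracker entry above for community-operators-bwlrd decomposes as follows: podStartE2EDuration (5.002254858s) is observedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.608543742s) is that E2E figure minus the image-pull window, lastFinishedPulling - firstStartedPulling = 08:53:39.34816686 - 08:53:36.954455744 = 2.393711116s. A minimal Go sketch of that arithmetic, using the values from the entry (illustrative only, not kubelet's implementation):

    // Sketch of the arithmetic behind "Observed pod startup duration".
    // Field names mirror the log text, not kubelet's code.
    package main

    import (
    	"fmt"
    	"time"
    )

    // startupDurations returns the SLO and E2E durations: E2E is running-created,
    // and SLO excludes the image-pull window when a pull actually happened.
    func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
    	e2e = running.Sub(created)
    	slo = e2e
    	if !firstPull.IsZero() { // zero pull timestamps mean no pull occurred
    		slo -= lastPull.Sub(firstPull)
    	}
    	return slo, e2e
    }

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	// Values from the community-operators-bwlrd entry above.
    	slo, e2e := startupDurations(
    		parse("2026-01-29 08:53:35 +0000 UTC"),           // podCreationTimestamp
    		parse("2026-01-29 08:53:36.954455744 +0000 UTC"), // firstStartedPulling
    		parse("2026-01-29 08:53:39.34816686 +0000 UTC"),  // lastFinishedPulling
    		parse("2026-01-29 08:53:40.002254858 +0000 UTC"), // observedRunningTime
    	)
    	fmt.Println(slo, e2e) // 2.608543742s 5.002254858s
    }

The same decomposition applies to every "Observed pod startup duration" line in this log; entries whose pull timestamps are the zero sentinel are discussed further down.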
Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.533352 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.587478 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content\") pod \"187b6311-0e43-41c5-9663-246cd1cd260b\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.587736 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities\") pod \"187b6311-0e43-41c5-9663-246cd1cd260b\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.587861 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjfc\" (UniqueName: \"kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc\") pod \"187b6311-0e43-41c5-9663-246cd1cd260b\" (UID: \"187b6311-0e43-41c5-9663-246cd1cd260b\") " Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.588546 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities" (OuterVolumeSpecName: "utilities") pod "187b6311-0e43-41c5-9663-246cd1cd260b" (UID: "187b6311-0e43-41c5-9663-246cd1cd260b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.593508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc" (OuterVolumeSpecName: "kube-api-access-rgjfc") pod "187b6311-0e43-41c5-9663-246cd1cd260b" (UID: "187b6311-0e43-41c5-9663-246cd1cd260b"). InnerVolumeSpecName "kube-api-access-rgjfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.689353 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:48 crc kubenswrapper[5031]: I0129 08:53:48.689445 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgjfc\" (UniqueName: \"kubernetes.io/projected/187b6311-0e43-41c5-9663-246cd1cd260b-kube-api-access-rgjfc\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.037164 5031 generic.go:334] "Generic (PLEG): container finished" podID="187b6311-0e43-41c5-9663-246cd1cd260b" containerID="2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076" exitCode=0 Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.037230 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerDied","Data":"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076"} Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.037239 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bwlrd" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.037267 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwlrd" event={"ID":"187b6311-0e43-41c5-9663-246cd1cd260b","Type":"ContainerDied","Data":"6334d8ef86121eb05e7bb13cb70053c0b93ab164f2919cd433c1f8d986051a12"} Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.037287 5031 scope.go:117] "RemoveContainer" containerID="2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.056505 5031 scope.go:117] "RemoveContainer" containerID="be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.058892 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "187b6311-0e43-41c5-9663-246cd1cd260b" (UID: "187b6311-0e43-41c5-9663-246cd1cd260b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.072818 5031 scope.go:117] "RemoveContainer" containerID="37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.090599 5031 scope.go:117] "RemoveContainer" containerID="2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076" Jan 29 08:53:49 crc kubenswrapper[5031]: E0129 08:53:49.091199 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076\": container with ID starting with 2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076 not found: ID does not exist" containerID="2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.091241 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076"} err="failed to get container status \"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076\": rpc error: code = NotFound desc = could not find container \"2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076\": container with ID starting with 2f4be11357be0c7fa3f10572d77026751cdcec062fdcfec815763936f7521076 not found: ID does not exist" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.091265 5031 scope.go:117] "RemoveContainer" containerID="be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad" Jan 29 08:53:49 crc kubenswrapper[5031]: E0129 08:53:49.091738 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad\": container with ID starting with be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad not found: ID does not exist" containerID="be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.091792 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad"} err="failed to get container status 
\"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad\": rpc error: code = NotFound desc = could not find container \"be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad\": container with ID starting with be7f8df347011ce36068868c59ffc729db57324c8127fb1e63a0d48b579b51ad not found: ID does not exist" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.091820 5031 scope.go:117] "RemoveContainer" containerID="37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7" Jan 29 08:53:49 crc kubenswrapper[5031]: E0129 08:53:49.092242 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7\": container with ID starting with 37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7 not found: ID does not exist" containerID="37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.092279 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7"} err="failed to get container status \"37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7\": rpc error: code = NotFound desc = could not find container \"37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7\": container with ID starting with 37cc9156d62a2638f7875d1bd0a1539c132961b24560084157feeca0ada801d7 not found: ID does not exist" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.094344 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187b6311-0e43-41c5-9663-246cd1cd260b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.368683 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:49 crc kubenswrapper[5031]: I0129 08:53:49.373902 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bwlrd"] Jan 29 08:53:50 crc kubenswrapper[5031]: I0129 08:53:50.292867 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" path="/var/lib/kubelet/pods/187b6311-0e43-41c5-9663-246cd1cd260b/volumes" Jan 29 08:53:52 crc kubenswrapper[5031]: I0129 08:53:52.289007 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.151449 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn"] Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.151899 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="registry-server" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.151929 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="registry-server" Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.151951 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="extract-content" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.151962 5031 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="extract-content" Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.151989 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="extract-utilities" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.151999 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="extract-utilities" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.152167 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="187b6311-0e43-41c5-9663-246cd1cd260b" containerName="registry-server" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.152838 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.156255 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.156430 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-99ftr"] Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.156755 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2sdd2" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.159742 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.162424 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.163046 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.212779 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn"] Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.247830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-conf\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.247885 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-reloader\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.247917 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5kkl\" (UniqueName: \"kubernetes.io/projected/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-kube-api-access-l5kkl\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.247950 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tgl\" (UniqueName: \"kubernetes.io/projected/4fef4c25-5a46-45ba-bc17-fe5696028ac9-kube-api-access-x4tgl\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" 
(UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.247984 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fef4c25-5a46-45ba-bc17-fe5696028ac9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" (UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.248001 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-sockets\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.248083 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics-certs\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.248143 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.248172 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-startup\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.248767 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dsws8"] Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.249952 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.252047 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c7gr7" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.252274 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.252303 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.253281 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.298888 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-2ls2g"] Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.306595 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.309115 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.310608 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2ls2g"] Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.351336 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-conf\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.351389 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-reloader\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.351413 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5kkl\" (UniqueName: \"kubernetes.io/projected/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-kube-api-access-l5kkl\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352358 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4tgl\" (UniqueName: \"kubernetes.io/projected/4fef4c25-5a46-45ba-bc17-fe5696028ac9-kube-api-access-x4tgl\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" (UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352465 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352480 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-conf\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352496 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-sockets\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352557 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fef4c25-5a46-45ba-bc17-fe5696028ac9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" (UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352628 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics-certs\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352666 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352692 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-startup\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352721 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.352815 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-sockets\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.353068 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-reloader\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.353165 5031 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.353228 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics-certs podName:93ad0b89-0d88-4e18-9f8d-4071a5847f1a nodeName:}" failed. No retries permitted until 2026-01-29 08:53:53.853205882 +0000 UTC m=+914.352794014 (durationBeforeRetry 500ms). 
Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.353358 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.354151 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-frr-startup\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.372356 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fef4c25-5a46-45ba-bc17-fe5696028ac9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" (UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.373034 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5kkl\" (UniqueName: \"kubernetes.io/projected/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-kube-api-access-l5kkl\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.373357 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4tgl\" (UniqueName: \"kubernetes.io/projected/4fef4c25-5a46-45ba-bc17-fe5696028ac9-kube-api-access-x4tgl\") pod \"frr-k8s-webhook-server-7df86c4f6c-7pdgn\" (UID: \"4fef4c25-5a46-45ba-bc17-fe5696028ac9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454084 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzgg\" (UniqueName: \"kubernetes.io/projected/d0fae1e4-5509-482f-9430-17a7148dc235-kube-api-access-bdzgg\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454141 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metallb-excludel2\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454254 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26thh\" (UniqueName: \"kubernetes.io/projected/28efe09e-8a3b-4a66-8818-18a1bc11b34d-kube-api-access-26thh\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454296 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454338 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-cert\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454374 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.454393 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-metrics-certs\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.454490 5031 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.454615 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs podName:28efe09e-8a3b-4a66-8818-18a1bc11b34d nodeName:}" failed. No retries permitted until 2026-01-29 08:53:53.954585779 +0000 UTC m=+914.454173731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs") pod "speaker-dsws8" (UID: "28efe09e-8a3b-4a66-8818-18a1bc11b34d") : secret "speaker-certs-secret" not found Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.454632 5031 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.454756 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist podName:28efe09e-8a3b-4a66-8818-18a1bc11b34d nodeName:}" failed. No retries permitted until 2026-01-29 08:53:53.954727342 +0000 UTC m=+914.454315294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist") pod "speaker-dsws8" (UID: "28efe09e-8a3b-4a66-8818-18a1bc11b34d") : secret "metallb-memberlist" not found Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.470956 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.555341 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26thh\" (UniqueName: \"kubernetes.io/projected/28efe09e-8a3b-4a66-8818-18a1bc11b34d-kube-api-access-26thh\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.555450 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-cert\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.555494 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-metrics-certs\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.555532 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzgg\" (UniqueName: \"kubernetes.io/projected/d0fae1e4-5509-482f-9430-17a7148dc235-kube-api-access-bdzgg\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.555565 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metallb-excludel2\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.556360 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metallb-excludel2\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.558052 5031 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.559935 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-metrics-certs\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.570323 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0fae1e4-5509-482f-9430-17a7148dc235-cert\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.575665 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26thh\" (UniqueName: \"kubernetes.io/projected/28efe09e-8a3b-4a66-8818-18a1bc11b34d-kube-api-access-26thh\") pod \"speaker-dsws8\" 
(UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.575839 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzgg\" (UniqueName: \"kubernetes.io/projected/d0fae1e4-5509-482f-9430-17a7148dc235-kube-api-access-bdzgg\") pod \"controller-6968d8fdc4-2ls2g\" (UID: \"d0fae1e4-5509-482f-9430-17a7148dc235\") " pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.622110 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.858780 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics-certs\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.863480 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93ad0b89-0d88-4e18-9f8d-4071a5847f1a-metrics-certs\") pod \"frr-k8s-99ftr\" (UID: \"93ad0b89-0d88-4e18-9f8d-4071a5847f1a\") " pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.959528 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.959603 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.959650 5031 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 08:53:53 crc kubenswrapper[5031]: E0129 08:53:53.959706 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist podName:28efe09e-8a3b-4a66-8818-18a1bc11b34d nodeName:}" failed. No retries permitted until 2026-01-29 08:53:54.95968973 +0000 UTC m=+915.459277682 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist") pod "speaker-dsws8" (UID: "28efe09e-8a3b-4a66-8818-18a1bc11b34d") : secret "metallb-memberlist" not found Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.962621 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-metrics-certs\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:53 crc kubenswrapper[5031]: I0129 08:53:53.983604 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn"] Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.017576 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2ls2g"] Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.067185 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" event={"ID":"4fef4c25-5a46-45ba-bc17-fe5696028ac9","Type":"ContainerStarted","Data":"7fc738629786f0c07e65bd6b011a884a023e5d360bb841622532685b49ed933f"} Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.068792 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2ls2g" event={"ID":"d0fae1e4-5509-482f-9430-17a7148dc235","Type":"ContainerStarted","Data":"78891cb5c78b58a912f792d87980477fa877b8707d6000ae65f589ba8cd53155"} Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.081277 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.972422 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:54 crc kubenswrapper[5031]: I0129 08:53:54.976481 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28efe09e-8a3b-4a66-8818-18a1bc11b34d-memberlist\") pod \"speaker-dsws8\" (UID: \"28efe09e-8a3b-4a66-8818-18a1bc11b34d\") " pod="metallb-system/speaker-dsws8" Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.067173 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dsws8" Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.076248 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2ls2g" event={"ID":"d0fae1e4-5509-482f-9430-17a7148dc235","Type":"ContainerStarted","Data":"74fa1fcbfb368e217507dfaf6664fe80f20a9a9ad62156d7a1cb8acb7ab870ab"} Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.076328 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2ls2g" event={"ID":"d0fae1e4-5509-482f-9430-17a7148dc235","Type":"ContainerStarted","Data":"c70d2e20cc163f0820de5608329c07b03703d330d6778bdba957b763a4ff6498"} Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.077207 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.077416 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"87bda2d6cc68020f46c88e5989ab054acb39cd5536db8c700297934d4b035879"} Jan 29 08:53:55 crc kubenswrapper[5031]: W0129 08:53:55.094169 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28efe09e_8a3b_4a66_8818_18a1bc11b34d.slice/crio-f05a81b59114d831bcd39e077b45934aca3bf6e15e5eedcbec4523de95ccb638 WatchSource:0}: Error finding container f05a81b59114d831bcd39e077b45934aca3bf6e15e5eedcbec4523de95ccb638: Status 404 returned error can't find the container with id f05a81b59114d831bcd39e077b45934aca3bf6e15e5eedcbec4523de95ccb638 Jan 29 08:53:55 crc kubenswrapper[5031]: I0129 08:53:55.095874 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-2ls2g" podStartSLOduration=2.09585804 podStartE2EDuration="2.09585804s" podCreationTimestamp="2026-01-29 08:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:53:55.093398445 +0000 UTC m=+915.592986407" watchObservedRunningTime="2026-01-29 08:53:55.09585804 +0000 UTC m=+915.595445992" Jan 29 08:53:56 crc kubenswrapper[5031]: I0129 08:53:56.099232 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dsws8" event={"ID":"28efe09e-8a3b-4a66-8818-18a1bc11b34d","Type":"ContainerStarted","Data":"afd7eb395e52f93683770af7e3f7f549ee8b02981a1cc51ab17c95576d916dcc"} Jan 29 08:53:56 crc kubenswrapper[5031]: I0129 08:53:56.099607 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dsws8" event={"ID":"28efe09e-8a3b-4a66-8818-18a1bc11b34d","Type":"ContainerStarted","Data":"955ff444a8af37788a6d60945c4542d632d7a4828782c80ef3635951ac7e734e"} Jan 29 08:53:56 crc kubenswrapper[5031]: I0129 08:53:56.099621 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dsws8" event={"ID":"28efe09e-8a3b-4a66-8818-18a1bc11b34d","Type":"ContainerStarted","Data":"f05a81b59114d831bcd39e077b45934aca3bf6e15e5eedcbec4523de95ccb638"} Jan 29 08:53:56 crc kubenswrapper[5031]: I0129 08:53:56.099958 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dsws8" Jan 29 08:53:56 crc kubenswrapper[5031]: I0129 08:53:56.122357 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dsws8" 
Jan 29 08:53:57 crc kubenswrapper[5031]: I0129 08:53:57.886509 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:53:57 crc kubenswrapper[5031]: I0129 08:53:57.888163 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:57 crc kubenswrapper[5031]: I0129 08:53:57.905904 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.025135 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.025194 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284jh\" (UniqueName: \"kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.025264 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.126794 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.126854 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284jh\" (UniqueName: \"kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.126915 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.127444 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.128441 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.148560 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284jh\" (UniqueName: \"kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh\") pod \"redhat-marketplace-q997h\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.208031 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:53:58 crc kubenswrapper[5031]: I0129 08:53:58.961508 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:53:58 crc kubenswrapper[5031]: W0129 08:53:58.970806 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e5efb1f_4f6f_426e_8780_240d68fb8539.slice/crio-4ef6d73bb1161eb6eb02b6dd690fb22fd71a505dc3775c63af36829dacc95e6f WatchSource:0}: Error finding container 4ef6d73bb1161eb6eb02b6dd690fb22fd71a505dc3775c63af36829dacc95e6f: Status 404 returned error can't find the container with id 4ef6d73bb1161eb6eb02b6dd690fb22fd71a505dc3775c63af36829dacc95e6f Jan 29 08:53:59 crc kubenswrapper[5031]: I0129 08:53:59.141134 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerStarted","Data":"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd"} Jan 29 08:53:59 crc kubenswrapper[5031]: I0129 08:53:59.141738 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerStarted","Data":"4ef6d73bb1161eb6eb02b6dd690fb22fd71a505dc3775c63af36829dacc95e6f"} Jan 29 08:54:00 crc kubenswrapper[5031]: I0129 08:54:00.148344 5031 generic.go:334] "Generic (PLEG): container finished" podID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerID="fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd" exitCode=0 Jan 29 08:54:00 crc kubenswrapper[5031]: I0129 08:54:00.148406 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerDied","Data":"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd"} Jan 29 08:54:04 crc kubenswrapper[5031]: I0129 08:54:04.193006 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" event={"ID":"4fef4c25-5a46-45ba-bc17-fe5696028ac9","Type":"ContainerStarted","Data":"558ad6c72a0c412cd74faa7be7884c4bc993a948589913f325db1b41e56cec35"} Jan 29 08:54:04 crc kubenswrapper[5031]: I0129 08:54:04.193668 5031 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:54:04 crc kubenswrapper[5031]: I0129 08:54:04.194648 5031 generic.go:334] "Generic (PLEG): container finished" podID="93ad0b89-0d88-4e18-9f8d-4071a5847f1a" containerID="89373da67116db57d21675a9a354488a00f898f5fad624f1b496ed2b9eeb99c6" exitCode=0 Jan 29 08:54:04 crc kubenswrapper[5031]: I0129 08:54:04.194686 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerDied","Data":"89373da67116db57d21675a9a354488a00f898f5fad624f1b496ed2b9eeb99c6"} Jan 29 08:54:04 crc kubenswrapper[5031]: I0129 08:54:04.249969 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" podStartSLOduration=1.688015836 podStartE2EDuration="11.249946608s" podCreationTimestamp="2026-01-29 08:53:53 +0000 UTC" firstStartedPulling="2026-01-29 08:53:53.993797034 +0000 UTC m=+914.493384986" lastFinishedPulling="2026-01-29 08:54:03.555727806 +0000 UTC m=+924.055315758" observedRunningTime="2026-01-29 08:54:04.213559024 +0000 UTC m=+924.713146976" watchObservedRunningTime="2026-01-29 08:54:04.249946608 +0000 UTC m=+924.749534560" Jan 29 08:54:05 crc kubenswrapper[5031]: I0129 08:54:05.072883 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dsws8" Jan 29 08:54:05 crc kubenswrapper[5031]: I0129 08:54:05.201982 5031 generic.go:334] "Generic (PLEG): container finished" podID="93ad0b89-0d88-4e18-9f8d-4071a5847f1a" containerID="58e0749b7be9b12a680609ad0f550c85c39f4f0dca5238edfe7ee0afd5588fec" exitCode=0 Jan 29 08:54:05 crc kubenswrapper[5031]: I0129 08:54:05.202094 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerDied","Data":"58e0749b7be9b12a680609ad0f550c85c39f4f0dca5238edfe7ee0afd5588fec"} Jan 29 08:54:05 crc kubenswrapper[5031]: I0129 08:54:05.204940 5031 generic.go:334] "Generic (PLEG): container finished" podID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerID="0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff" exitCode=0 Jan 29 08:54:05 crc kubenswrapper[5031]: I0129 08:54:05.205442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerDied","Data":"0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff"} Jan 29 08:54:06 crc kubenswrapper[5031]: I0129 08:54:06.213321 5031 generic.go:334] "Generic (PLEG): container finished" podID="93ad0b89-0d88-4e18-9f8d-4071a5847f1a" containerID="9aa59b23df638414ae5783ff9dde4f066be29b8512cfba4618e779486649c8cc" exitCode=0 Jan 29 08:54:06 crc kubenswrapper[5031]: I0129 08:54:06.213379 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerDied","Data":"9aa59b23df638414ae5783ff9dde4f066be29b8512cfba4618e779486649c8cc"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.223458 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"c936a4b21d7c6e0b060b90ca0017e2850af9d692e70eee30a01babb2b9aa49eb"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.223499 5031 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"9b614d8b1f2417ca070f4a32dbe260fbf8965219be779a156504f4d57fd34fca"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.223510 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"af9912034b668d286204fdaf64f85c02f52aaf5e24fc7aed835a2a5a5d14bfc5"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.223519 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"ded0ba3a3548140b54239e2ea00b0ad87d53d0d3bb5bb09f956ac3c37a39c172"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.225516 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerStarted","Data":"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c"} Jan 29 08:54:07 crc kubenswrapper[5031]: I0129 08:54:07.244553 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q997h" podStartSLOduration=7.493651724 podStartE2EDuration="10.244518963s" podCreationTimestamp="2026-01-29 08:53:57 +0000 UTC" firstStartedPulling="2026-01-29 08:54:03.441580161 +0000 UTC m=+923.941168123" lastFinishedPulling="2026-01-29 08:54:06.19244741 +0000 UTC m=+926.692035362" observedRunningTime="2026-01-29 08:54:07.240271039 +0000 UTC m=+927.739858991" watchObservedRunningTime="2026-01-29 08:54:07.244518963 +0000 UTC m=+927.744106915" Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.208666 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.209001 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.239442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"a07e0537885131d35d9a2572df1e1b6f3d7d161c326fe99ec4c8f00376e74b04"} Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.258673 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.493589 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:54:08 crc kubenswrapper[5031]: I0129 08:54:08.493659 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:54:09 crc kubenswrapper[5031]: I0129 08:54:09.248548 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-99ftr" 
event={"ID":"93ad0b89-0d88-4e18-9f8d-4071a5847f1a","Type":"ContainerStarted","Data":"e0161412a3390ce808206931370bb19c1f5444770d9f82feccf14a8d819b1031"} Jan 29 08:54:09 crc kubenswrapper[5031]: I0129 08:54:09.248888 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.082475 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-99ftr" podStartSLOduration=10.733207188 podStartE2EDuration="20.082454936s" podCreationTimestamp="2026-01-29 08:53:53 +0000 UTC" firstStartedPulling="2026-01-29 08:53:54.204096865 +0000 UTC m=+914.703684817" lastFinishedPulling="2026-01-29 08:54:03.553344613 +0000 UTC m=+924.052932565" observedRunningTime="2026-01-29 08:54:09.270107276 +0000 UTC m=+929.769695228" watchObservedRunningTime="2026-01-29 08:54:13.082454936 +0000 UTC m=+933.582042888" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.087510 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-znw6z"] Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.088348 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.089999 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.094877 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.095776 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lj9kr" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.097491 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-znw6z"] Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.197718 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9t8\" (UniqueName: \"kubernetes.io/projected/d18ce80b-f96c-41a4-80b5-fe959665c78a-kube-api-access-lg9t8\") pod \"openstack-operator-index-znw6z\" (UID: \"d18ce80b-f96c-41a4-80b5-fe959665c78a\") " pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.299132 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg9t8\" (UniqueName: \"kubernetes.io/projected/d18ce80b-f96c-41a4-80b5-fe959665c78a-kube-api-access-lg9t8\") pod \"openstack-operator-index-znw6z\" (UID: \"d18ce80b-f96c-41a4-80b5-fe959665c78a\") " pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.321151 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg9t8\" (UniqueName: \"kubernetes.io/projected/d18ce80b-f96c-41a4-80b5-fe959665c78a-kube-api-access-lg9t8\") pod \"openstack-operator-index-znw6z\" (UID: \"d18ce80b-f96c-41a4-80b5-fe959665c78a\") " pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.405742 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.475887 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7pdgn" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.626250 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-2ls2g" Jan 29 08:54:13 crc kubenswrapper[5031]: I0129 08:54:13.874197 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-znw6z"] Jan 29 08:54:14 crc kubenswrapper[5031]: I0129 08:54:14.082256 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:54:14 crc kubenswrapper[5031]: I0129 08:54:14.234830 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:54:14 crc kubenswrapper[5031]: I0129 08:54:14.278644 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-znw6z" event={"ID":"d18ce80b-f96c-41a4-80b5-fe959665c78a","Type":"ContainerStarted","Data":"40bb7689dceb5d88e4a94044280df41eb80478c83db96755872135c3d8ecc546"} Jan 29 08:54:18 crc kubenswrapper[5031]: I0129 08:54:18.256208 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:18 crc kubenswrapper[5031]: I0129 08:54:18.307216 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-znw6z" event={"ID":"d18ce80b-f96c-41a4-80b5-fe959665c78a","Type":"ContainerStarted","Data":"8c9c3daa461c383fcbe09c3eef6b65d218c7c093b88c05710767aa0a8bf5de99"} Jan 29 08:54:18 crc kubenswrapper[5031]: I0129 08:54:18.324612 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-znw6z" podStartSLOduration=1.609089872 podStartE2EDuration="5.324591226s" podCreationTimestamp="2026-01-29 08:54:13 +0000 UTC" firstStartedPulling="2026-01-29 08:54:13.877077348 +0000 UTC m=+934.376665290" lastFinishedPulling="2026-01-29 08:54:17.592578692 +0000 UTC m=+938.092166644" observedRunningTime="2026-01-29 08:54:18.319721397 +0000 UTC m=+938.819309369" watchObservedRunningTime="2026-01-29 08:54:18.324591226 +0000 UTC m=+938.824179178" Jan 29 08:54:19 crc kubenswrapper[5031]: I0129 08:54:19.679840 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:54:19 crc kubenswrapper[5031]: I0129 08:54:19.680110 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q997h" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="registry-server" containerID="cri-o://72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c" gracePeriod=2 Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.083335 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.164536 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-284jh\" (UniqueName: \"kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh\") pod \"8e5efb1f-4f6f-426e-8780-240d68fb8539\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.164603 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities\") pod \"8e5efb1f-4f6f-426e-8780-240d68fb8539\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.164742 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content\") pod \"8e5efb1f-4f6f-426e-8780-240d68fb8539\" (UID: \"8e5efb1f-4f6f-426e-8780-240d68fb8539\") " Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.165691 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities" (OuterVolumeSpecName: "utilities") pod "8e5efb1f-4f6f-426e-8780-240d68fb8539" (UID: "8e5efb1f-4f6f-426e-8780-240d68fb8539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.170888 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh" (OuterVolumeSpecName: "kube-api-access-284jh") pod "8e5efb1f-4f6f-426e-8780-240d68fb8539" (UID: "8e5efb1f-4f6f-426e-8780-240d68fb8539"). InnerVolumeSpecName "kube-api-access-284jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.192808 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e5efb1f-4f6f-426e-8780-240d68fb8539" (UID: "8e5efb1f-4f6f-426e-8780-240d68fb8539"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.266908 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.266955 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-284jh\" (UniqueName: \"kubernetes.io/projected/8e5efb1f-4f6f-426e-8780-240d68fb8539-kube-api-access-284jh\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.266970 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5efb1f-4f6f-426e-8780-240d68fb8539-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.321584 5031 generic.go:334] "Generic (PLEG): container finished" podID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerID="72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c" exitCode=0 Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.321623 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerDied","Data":"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c"} Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.321881 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q997h" event={"ID":"8e5efb1f-4f6f-426e-8780-240d68fb8539","Type":"ContainerDied","Data":"4ef6d73bb1161eb6eb02b6dd690fb22fd71a505dc3775c63af36829dacc95e6f"} Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.321681 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q997h" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.321946 5031 scope.go:117] "RemoveContainer" containerID="72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.342936 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.345099 5031 scope.go:117] "RemoveContainer" containerID="0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.351234 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q997h"] Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.373214 5031 scope.go:117] "RemoveContainer" containerID="fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.396924 5031 scope.go:117] "RemoveContainer" containerID="72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c" Jan 29 08:54:20 crc kubenswrapper[5031]: E0129 08:54:20.397342 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c\": container with ID starting with 72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c not found: ID does not exist" containerID="72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.397417 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c"} err="failed to get container status \"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c\": rpc error: code = NotFound desc = could not find container \"72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c\": container with ID starting with 72de56b9425883b585bb3a1416826b7954e34dd712e9762c87fdcab7bb26ff0c not found: ID does not exist" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.397445 5031 scope.go:117] "RemoveContainer" containerID="0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff" Jan 29 08:54:20 crc kubenswrapper[5031]: E0129 08:54:20.399347 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff\": container with ID starting with 0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff not found: ID does not exist" containerID="0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.399406 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff"} err="failed to get container status \"0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff\": rpc error: code = NotFound desc = could not find container \"0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff\": container with ID starting with 0515eaa3ee5e5d06e846074b92fb20e3a02c0ebdf90036fb4ab843eeb95c56ff not found: ID does not exist" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.399424 5031 scope.go:117] "RemoveContainer" 
containerID="fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd" Jan 29 08:54:20 crc kubenswrapper[5031]: E0129 08:54:20.400188 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd\": container with ID starting with fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd not found: ID does not exist" containerID="fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd" Jan 29 08:54:20 crc kubenswrapper[5031]: I0129 08:54:20.400345 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd"} err="failed to get container status \"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd\": rpc error: code = NotFound desc = could not find container \"fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd\": container with ID starting with fac5465d1637c4c9348195dad7fc31aaf90d5d06a5d2fe184117b6ff237677cd not found: ID does not exist" Jan 29 08:54:22 crc kubenswrapper[5031]: I0129 08:54:22.290012 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" path="/var/lib/kubelet/pods/8e5efb1f-4f6f-426e-8780-240d68fb8539/volumes" Jan 29 08:54:23 crc kubenswrapper[5031]: I0129 08:54:23.406305 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:23 crc kubenswrapper[5031]: I0129 08:54:23.406760 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:23 crc kubenswrapper[5031]: I0129 08:54:23.434817 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:24 crc kubenswrapper[5031]: I0129 08:54:24.085122 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-99ftr" Jan 29 08:54:24 crc kubenswrapper[5031]: I0129 08:54:24.368858 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-znw6z" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.138820 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959"] Jan 29 08:54:27 crc kubenswrapper[5031]: E0129 08:54:27.139382 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="extract-utilities" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.139394 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="extract-utilities" Jan 29 08:54:27 crc kubenswrapper[5031]: E0129 08:54:27.139417 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="extract-content" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.139423 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="extract-content" Jan 29 08:54:27 crc kubenswrapper[5031]: E0129 08:54:27.139431 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="registry-server" Jan 29 
08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.139437 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="registry-server" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.139535 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5efb1f-4f6f-426e-8780-240d68fb8539" containerName="registry-server" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.140519 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.147754 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fdnz8" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.152389 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959"] Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.269151 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.269233 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.269259 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfcx8\" (UniqueName: \"kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.371078 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.371117 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfcx8\" (UniqueName: \"kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.371205 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.371672 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.371673 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.393154 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfcx8\" (UniqueName: \"kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8\") pod \"7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.463944 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:27 crc kubenswrapper[5031]: I0129 08:54:27.780124 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959"] Jan 29 08:54:27 crc kubenswrapper[5031]: W0129 08:54:27.789583 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa518afd_4138_4e05_9b66_939dc9fea8d1.slice/crio-6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304 WatchSource:0}: Error finding container 6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304: Status 404 returned error can't find the container with id 6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304 Jan 29 08:54:28 crc kubenswrapper[5031]: I0129 08:54:28.375409 5031 generic.go:334] "Generic (PLEG): container finished" podID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerID="c136e719326452d68c3ac0f8d229e13b0cf6ee5c99125c48c039f699cb41bd90" exitCode=0 Jan 29 08:54:28 crc kubenswrapper[5031]: I0129 08:54:28.375455 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" event={"ID":"fa518afd-4138-4e05-9b66-939dc9fea8d1","Type":"ContainerDied","Data":"c136e719326452d68c3ac0f8d229e13b0cf6ee5c99125c48c039f699cb41bd90"} Jan 29 08:54:28 crc kubenswrapper[5031]: I0129 08:54:28.375485 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" 
event={"ID":"fa518afd-4138-4e05-9b66-939dc9fea8d1","Type":"ContainerStarted","Data":"6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304"} Jan 29 08:54:30 crc kubenswrapper[5031]: I0129 08:54:30.388249 5031 generic.go:334] "Generic (PLEG): container finished" podID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerID="2b09f2f1b07174fe7a6675f9523279f467f10a055181560bc570ae70d1c933d0" exitCode=0 Jan 29 08:54:30 crc kubenswrapper[5031]: I0129 08:54:30.388352 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" event={"ID":"fa518afd-4138-4e05-9b66-939dc9fea8d1","Type":"ContainerDied","Data":"2b09f2f1b07174fe7a6675f9523279f467f10a055181560bc570ae70d1c933d0"} Jan 29 08:54:30 crc kubenswrapper[5031]: E0129 08:54:30.840148 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa518afd_4138_4e05_9b66_939dc9fea8d1.slice/crio-conmon-3c85a804b9745c15cedcf7ca20e566c44173e04fe72bda0c383025a1a3d38a69.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:54:31 crc kubenswrapper[5031]: I0129 08:54:31.396461 5031 generic.go:334] "Generic (PLEG): container finished" podID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerID="3c85a804b9745c15cedcf7ca20e566c44173e04fe72bda0c383025a1a3d38a69" exitCode=0 Jan 29 08:54:31 crc kubenswrapper[5031]: I0129 08:54:31.396514 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" event={"ID":"fa518afd-4138-4e05-9b66-939dc9fea8d1","Type":"ContainerDied","Data":"3c85a804b9745c15cedcf7ca20e566c44173e04fe72bda0c383025a1a3d38a69"} Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.645568 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.744440 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle\") pod \"fa518afd-4138-4e05-9b66-939dc9fea8d1\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.744507 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util\") pod \"fa518afd-4138-4e05-9b66-939dc9fea8d1\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.744638 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfcx8\" (UniqueName: \"kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8\") pod \"fa518afd-4138-4e05-9b66-939dc9fea8d1\" (UID: \"fa518afd-4138-4e05-9b66-939dc9fea8d1\") " Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.745497 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle" (OuterVolumeSpecName: "bundle") pod "fa518afd-4138-4e05-9b66-939dc9fea8d1" (UID: "fa518afd-4138-4e05-9b66-939dc9fea8d1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.751117 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8" (OuterVolumeSpecName: "kube-api-access-nfcx8") pod "fa518afd-4138-4e05-9b66-939dc9fea8d1" (UID: "fa518afd-4138-4e05-9b66-939dc9fea8d1"). InnerVolumeSpecName "kube-api-access-nfcx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.775032 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util" (OuterVolumeSpecName: "util") pod "fa518afd-4138-4e05-9b66-939dc9fea8d1" (UID: "fa518afd-4138-4e05-9b66-939dc9fea8d1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.845915 5031 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.845947 5031 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa518afd-4138-4e05-9b66-939dc9fea8d1-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:32 crc kubenswrapper[5031]: I0129 08:54:32.845957 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfcx8\" (UniqueName: \"kubernetes.io/projected/fa518afd-4138-4e05-9b66-939dc9fea8d1-kube-api-access-nfcx8\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:33 crc kubenswrapper[5031]: I0129 08:54:33.407817 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" event={"ID":"fa518afd-4138-4e05-9b66-939dc9fea8d1","Type":"ContainerDied","Data":"6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304"} Jan 29 08:54:33 crc kubenswrapper[5031]: I0129 08:54:33.408031 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9fb5652085f9950f1a5b3eae71cee86c0dbc05e9e5b4ab73e9325fd500a304" Jan 29 08:54:33 crc kubenswrapper[5031]: I0129 08:54:33.407913 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.359199 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7"] Jan 29 08:54:36 crc kubenswrapper[5031]: E0129 08:54:36.360790 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="pull" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.360875 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="pull" Jan 29 08:54:36 crc kubenswrapper[5031]: E0129 08:54:36.360946 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="extract" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.361019 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="extract" Jan 29 08:54:36 crc kubenswrapper[5031]: E0129 08:54:36.361087 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="util" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.361136 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="util" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.361284 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa518afd-4138-4e05-9b66-939dc9fea8d1" containerName="extract" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.361800 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.365729 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sh7f8" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.379937 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7"] Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.493735 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsk9g\" (UniqueName: \"kubernetes.io/projected/9d3b6973-ffdd-445f-b03f-3783ff2c3159-kube-api-access-wsk9g\") pod \"openstack-operator-controller-init-694c86d6f5-8tvx7\" (UID: \"9d3b6973-ffdd-445f-b03f-3783ff2c3159\") " pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.595431 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsk9g\" (UniqueName: \"kubernetes.io/projected/9d3b6973-ffdd-445f-b03f-3783ff2c3159-kube-api-access-wsk9g\") pod \"openstack-operator-controller-init-694c86d6f5-8tvx7\" (UID: \"9d3b6973-ffdd-445f-b03f-3783ff2c3159\") " pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.613748 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsk9g\" (UniqueName: \"kubernetes.io/projected/9d3b6973-ffdd-445f-b03f-3783ff2c3159-kube-api-access-wsk9g\") pod \"openstack-operator-controller-init-694c86d6f5-8tvx7\" (UID: 
\"9d3b6973-ffdd-445f-b03f-3783ff2c3159\") " pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:36 crc kubenswrapper[5031]: I0129 08:54:36.689755 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:37 crc kubenswrapper[5031]: I0129 08:54:37.145541 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7"] Jan 29 08:54:37 crc kubenswrapper[5031]: I0129 08:54:37.435559 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" event={"ID":"9d3b6973-ffdd-445f-b03f-3783ff2c3159","Type":"ContainerStarted","Data":"73c018f4500ceeef92d24f52e1e428775ffe79f586489dd594602a19c97fb6cd"} Jan 29 08:54:38 crc kubenswrapper[5031]: I0129 08:54:38.493794 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:54:38 crc kubenswrapper[5031]: I0129 08:54:38.493864 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:54:38 crc kubenswrapper[5031]: I0129 08:54:38.493914 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:54:38 crc kubenswrapper[5031]: I0129 08:54:38.494591 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:54:38 crc kubenswrapper[5031]: I0129 08:54:38.494664 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb" gracePeriod=600 Jan 29 08:54:39 crc kubenswrapper[5031]: I0129 08:54:39.449965 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb" exitCode=0 Jan 29 08:54:39 crc kubenswrapper[5031]: I0129 08:54:39.450012 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb"} Jan 29 08:54:39 crc kubenswrapper[5031]: I0129 08:54:39.450043 5031 scope.go:117] "RemoveContainer" containerID="603385108d4da3e63146c528ce05dcdbfcafcb208168a4663a80e4ba28e126b1" Jan 29 08:54:43 crc kubenswrapper[5031]: I0129 08:54:43.473967 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" event={"ID":"9d3b6973-ffdd-445f-b03f-3783ff2c3159","Type":"ContainerStarted","Data":"21e06460b4afb5e9c01cf6a46c3ef13baab31e156addfd66cba2a25737ebf63f"} Jan 29 08:54:43 crc kubenswrapper[5031]: I0129 08:54:43.475705 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:43 crc kubenswrapper[5031]: I0129 08:54:43.477333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57"} Jan 29 08:54:43 crc kubenswrapper[5031]: I0129 08:54:43.515180 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" podStartSLOduration=1.588685165 podStartE2EDuration="7.515159197s" podCreationTimestamp="2026-01-29 08:54:36 +0000 UTC" firstStartedPulling="2026-01-29 08:54:37.148098464 +0000 UTC m=+957.647686416" lastFinishedPulling="2026-01-29 08:54:43.074572486 +0000 UTC m=+963.574160448" observedRunningTime="2026-01-29 08:54:43.507618008 +0000 UTC m=+964.007205960" watchObservedRunningTime="2026-01-29 08:54:43.515159197 +0000 UTC m=+964.014747149" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.495017 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.497099 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.508106 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.599006 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.599309 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6tz\" (UniqueName: \"kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.599527 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.700462 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " 
pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.700882 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.701003 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6tz\" (UniqueName: \"kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.701091 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.701327 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.724075 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6tz\" (UniqueName: \"kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz\") pod \"certified-operators-7ccgq\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:46 crc kubenswrapper[5031]: I0129 08:54:46.813947 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:47 crc kubenswrapper[5031]: I0129 08:54:47.487287 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:54:47 crc kubenswrapper[5031]: W0129 08:54:47.488267 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod968ddc45_3fba_40f8_b64c_09213c30a673.slice/crio-fcda9522d9ab8d78197c63604d45043d0ccc1aeb0a47317accfd659dccde6ccf WatchSource:0}: Error finding container fcda9522d9ab8d78197c63604d45043d0ccc1aeb0a47317accfd659dccde6ccf: Status 404 returned error can't find the container with id fcda9522d9ab8d78197c63604d45043d0ccc1aeb0a47317accfd659dccde6ccf Jan 29 08:54:47 crc kubenswrapper[5031]: I0129 08:54:47.500666 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerStarted","Data":"fcda9522d9ab8d78197c63604d45043d0ccc1aeb0a47317accfd659dccde6ccf"} Jan 29 08:54:48 crc kubenswrapper[5031]: I0129 08:54:48.508822 5031 generic.go:334] "Generic (PLEG): container finished" podID="968ddc45-3fba-40f8-b64c-09213c30a673" containerID="a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1" exitCode=0 Jan 29 08:54:48 crc kubenswrapper[5031]: I0129 08:54:48.508915 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerDied","Data":"a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1"} Jan 29 08:54:50 crc kubenswrapper[5031]: I0129 08:54:50.529023 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerStarted","Data":"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da"} Jan 29 08:54:51 crc kubenswrapper[5031]: E0129 08:54:51.136717 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod968ddc45_3fba_40f8_b64c_09213c30a673.slice/crio-0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:54:51 crc kubenswrapper[5031]: I0129 08:54:51.536388 5031 generic.go:334] "Generic (PLEG): container finished" podID="968ddc45-3fba-40f8-b64c-09213c30a673" containerID="0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da" exitCode=0 Jan 29 08:54:51 crc kubenswrapper[5031]: I0129 08:54:51.536465 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerDied","Data":"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da"} Jan 29 08:54:52 crc kubenswrapper[5031]: I0129 08:54:52.543743 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerStarted","Data":"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088"} Jan 29 08:54:52 crc kubenswrapper[5031]: I0129 08:54:52.564696 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ccgq" podStartSLOduration=2.883725826 
podStartE2EDuration="6.564683176s" podCreationTimestamp="2026-01-29 08:54:46 +0000 UTC" firstStartedPulling="2026-01-29 08:54:48.511337611 +0000 UTC m=+969.010925563" lastFinishedPulling="2026-01-29 08:54:52.192294961 +0000 UTC m=+972.691882913" observedRunningTime="2026-01-29 08:54:52.559795636 +0000 UTC m=+973.059383588" watchObservedRunningTime="2026-01-29 08:54:52.564683176 +0000 UTC m=+973.064271128" Jan 29 08:54:56 crc kubenswrapper[5031]: I0129 08:54:56.692828 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-694c86d6f5-8tvx7" Jan 29 08:54:56 crc kubenswrapper[5031]: I0129 08:54:56.814963 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:56 crc kubenswrapper[5031]: I0129 08:54:56.815025 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:56 crc kubenswrapper[5031]: I0129 08:54:56.876336 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:57 crc kubenswrapper[5031]: I0129 08:54:57.637692 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:54:57 crc kubenswrapper[5031]: I0129 08:54:57.683830 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:54:59 crc kubenswrapper[5031]: I0129 08:54:59.584693 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ccgq" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="registry-server" containerID="cri-o://99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088" gracePeriod=2 Jan 29 08:54:59 crc kubenswrapper[5031]: I0129 08:54:59.955514 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.045033 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content\") pod \"968ddc45-3fba-40f8-b64c-09213c30a673\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.046566 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm6tz\" (UniqueName: \"kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz\") pod \"968ddc45-3fba-40f8-b64c-09213c30a673\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.046699 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities\") pod \"968ddc45-3fba-40f8-b64c-09213c30a673\" (UID: \"968ddc45-3fba-40f8-b64c-09213c30a673\") " Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.047779 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities" (OuterVolumeSpecName: "utilities") pod "968ddc45-3fba-40f8-b64c-09213c30a673" (UID: "968ddc45-3fba-40f8-b64c-09213c30a673"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.056484 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz" (OuterVolumeSpecName: "kube-api-access-tm6tz") pod "968ddc45-3fba-40f8-b64c-09213c30a673" (UID: "968ddc45-3fba-40f8-b64c-09213c30a673"). InnerVolumeSpecName "kube-api-access-tm6tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.100920 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "968ddc45-3fba-40f8-b64c-09213c30a673" (UID: "968ddc45-3fba-40f8-b64c-09213c30a673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.148830 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.148865 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968ddc45-3fba-40f8-b64c-09213c30a673-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.148876 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm6tz\" (UniqueName: \"kubernetes.io/projected/968ddc45-3fba-40f8-b64c-09213c30a673-kube-api-access-tm6tz\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.593208 5031 generic.go:334] "Generic (PLEG): container finished" podID="968ddc45-3fba-40f8-b64c-09213c30a673" containerID="99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088" exitCode=0 Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.593251 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ccgq" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.593271 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerDied","Data":"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088"} Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.593655 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ccgq" event={"ID":"968ddc45-3fba-40f8-b64c-09213c30a673","Type":"ContainerDied","Data":"fcda9522d9ab8d78197c63604d45043d0ccc1aeb0a47317accfd659dccde6ccf"} Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.593685 5031 scope.go:117] "RemoveContainer" containerID="99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.616995 5031 scope.go:117] "RemoveContainer" containerID="0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.625955 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.638327 5031 scope.go:117] "RemoveContainer" containerID="a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.640016 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ccgq"] Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.666672 5031 scope.go:117] "RemoveContainer" containerID="99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088" Jan 29 08:55:00 crc kubenswrapper[5031]: E0129 08:55:00.667149 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088\": container with ID starting with 99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088 not found: ID does not exist" containerID="99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.667201 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088"} err="failed to get container status \"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088\": rpc error: code = NotFound desc = could not find container \"99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088\": container with ID starting with 99eb11fe4b5cd7c56d8c4eae63203c6b30946f3c80befc972bd88119ec77b088 not found: ID does not exist" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.667230 5031 scope.go:117] "RemoveContainer" containerID="0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da" Jan 29 08:55:00 crc kubenswrapper[5031]: E0129 08:55:00.669559 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da\": container with ID starting with 0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da not found: ID does not exist" containerID="0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.669608 5031 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da"} err="failed to get container status \"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da\": rpc error: code = NotFound desc = could not find container \"0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da\": container with ID starting with 0e8d6a62411938ee4f399951e9506aa1b048d19444f340bbd6426bfe98e697da not found: ID does not exist" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.669639 5031 scope.go:117] "RemoveContainer" containerID="a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1" Jan 29 08:55:00 crc kubenswrapper[5031]: E0129 08:55:00.670009 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1\": container with ID starting with a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1 not found: ID does not exist" containerID="a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1" Jan 29 08:55:00 crc kubenswrapper[5031]: I0129 08:55:00.670037 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1"} err="failed to get container status \"a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1\": rpc error: code = NotFound desc = could not find container \"a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1\": container with ID starting with a1c3a8a66e2da3c038ebd297ffd2781f302a95597fd4903e0f45100ddc8f0be1 not found: ID does not exist" Jan 29 08:55:02 crc kubenswrapper[5031]: I0129 08:55:02.290116 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" path="/var/lib/kubelet/pods/968ddc45-3fba-40f8-b64c-09213c30a673/volumes" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.818838 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq"] Jan 29 08:55:14 crc kubenswrapper[5031]: E0129 08:55:14.819730 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="registry-server" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.819747 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="registry-server" Jan 29 08:55:14 crc kubenswrapper[5031]: E0129 08:55:14.819771 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="extract-utilities" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.819779 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="extract-utilities" Jan 29 08:55:14 crc kubenswrapper[5031]: E0129 08:55:14.819790 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="extract-content" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.819799 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="extract-content" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.819951 5031 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="968ddc45-3fba-40f8-b64c-09213c30a673" containerName="registry-server" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.820489 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.826878 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rzwzl" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.829112 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.829943 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.831746 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5n52s" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.839391 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.849324 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.858804 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.859772 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.861903 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5pp6c" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.898197 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.899282 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.904693 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rhnjk" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.904922 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.911618 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqjg\" (UniqueName: \"kubernetes.io/projected/a1850026-d710-4da7-883b-1b7149900523-kube-api-access-gkqjg\") pod \"cinder-operator-controller-manager-f6487bd57-mppwm\" (UID: \"a1850026-d710-4da7-883b-1b7149900523\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.921336 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.939781 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw"] Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.940829 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:14 crc kubenswrapper[5031]: I0129 08:55:14.947361 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-b7qdl" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.011501 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.018440 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blcc\" (UniqueName: \"kubernetes.io/projected/59d726a8-dfae-47c6-a479-682b32601f3b-kube-api-access-4blcc\") pod \"designate-operator-controller-manager-66dfbd6f5d-f5hc7\" (UID: \"59d726a8-dfae-47c6-a479-682b32601f3b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.018529 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zd7\" (UniqueName: \"kubernetes.io/projected/9d7a2eca-248d-464e-b698-5f4daee374d3-kube-api-access-h6zd7\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-6pqwq\" (UID: \"9d7a2eca-248d-464e-b698-5f4daee374d3\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.018564 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ths8\" (UniqueName: \"kubernetes.io/projected/6b581b93-53b8-4bda-a3bc-7ab837f7aec3-kube-api-access-2ths8\") pod \"glance-operator-controller-manager-7857f788f-x5hq5\" (UID: \"6b581b93-53b8-4bda-a3bc-7ab837f7aec3\") " pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.018602 5031 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gkqjg\" (UniqueName: \"kubernetes.io/projected/a1850026-d710-4da7-883b-1b7149900523-kube-api-access-gkqjg\") pod \"cinder-operator-controller-manager-f6487bd57-mppwm\" (UID: \"a1850026-d710-4da7-883b-1b7149900523\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.019659 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.021046 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.027440 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.029846 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4k2dx" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.072438 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.073452 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.081455 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.082347 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.085172 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.088866 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqjg\" (UniqueName: \"kubernetes.io/projected/a1850026-d710-4da7-883b-1b7149900523-kube-api-access-gkqjg\") pod \"cinder-operator-controller-manager-f6487bd57-mppwm\" (UID: \"a1850026-d710-4da7-883b-1b7149900523\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.089270 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.089742 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-88zf5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.094436 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.114861 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h6z8s" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.120087 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ww8\" (UniqueName: \"kubernetes.io/projected/fef04ed6-9416-4599-a960-cde56635da29-kube-api-access-f8ww8\") pod \"heat-operator-controller-manager-587c6bfdcf-tt4jw\" (UID: \"fef04ed6-9416-4599-a960-cde56635da29\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.120193 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blcc\" (UniqueName: \"kubernetes.io/projected/59d726a8-dfae-47c6-a479-682b32601f3b-kube-api-access-4blcc\") pod \"designate-operator-controller-manager-66dfbd6f5d-f5hc7\" (UID: \"59d726a8-dfae-47c6-a479-682b32601f3b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.120265 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zd7\" (UniqueName: \"kubernetes.io/projected/9d7a2eca-248d-464e-b698-5f4daee374d3-kube-api-access-h6zd7\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-6pqwq\" (UID: \"9d7a2eca-248d-464e-b698-5f4daee374d3\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.120297 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ths8\" (UniqueName: \"kubernetes.io/projected/6b581b93-53b8-4bda-a3bc-7ab837f7aec3-kube-api-access-2ths8\") pod \"glance-operator-controller-manager-7857f788f-x5hq5\" (UID: \"6b581b93-53b8-4bda-a3bc-7ab837f7aec3\") " pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.150322 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4blcc\" (UniqueName: \"kubernetes.io/projected/59d726a8-dfae-47c6-a479-682b32601f3b-kube-api-access-4blcc\") pod \"designate-operator-controller-manager-66dfbd6f5d-f5hc7\" (UID: \"59d726a8-dfae-47c6-a479-682b32601f3b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.154690 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.155949 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ths8\" (UniqueName: \"kubernetes.io/projected/6b581b93-53b8-4bda-a3bc-7ab837f7aec3-kube-api-access-2ths8\") pod \"glance-operator-controller-manager-7857f788f-x5hq5\" (UID: \"6b581b93-53b8-4bda-a3bc-7ab837f7aec3\") " pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.157485 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.158644 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.163785 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pdpjt" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.170850 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zd7\" (UniqueName: \"kubernetes.io/projected/9d7a2eca-248d-464e-b698-5f4daee374d3-kube-api-access-h6zd7\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-6pqwq\" (UID: \"9d7a2eca-248d-464e-b698-5f4daee374d3\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.174034 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.191584 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-9nxrk"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.192397 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.198001 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-w2gt9" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.231853 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfsh\" (UniqueName: \"kubernetes.io/projected/7771acfe-a081-49f6-afa7-79c7436486b4-kube-api-access-2nfsh\") pod \"ironic-operator-controller-manager-958664b5-tpj2j\" (UID: \"7771acfe-a081-49f6-afa7-79c7436486b4\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.231920 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.231950 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbwj\" (UniqueName: \"kubernetes.io/projected/5b5b3ff2-7c9d-412e-8eef-a203c3096694-kube-api-access-mvbwj\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.232007 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6n68\" (UniqueName: \"kubernetes.io/projected/911c19b6-72d1-4363-bae0-02bb5290a0c3-kube-api-access-q6n68\") pod \"horizon-operator-controller-manager-5fb775575f-ftmh8\" (UID: \"911c19b6-72d1-4363-bae0-02bb5290a0c3\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.232037 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8ww8\" (UniqueName: \"kubernetes.io/projected/fef04ed6-9416-4599-a960-cde56635da29-kube-api-access-f8ww8\") pod \"heat-operator-controller-manager-587c6bfdcf-tt4jw\" (UID: \"fef04ed6-9416-4599-a960-cde56635da29\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.232515 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.237466 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-9nxrk"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.268419 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.274055 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ww8\" (UniqueName: \"kubernetes.io/projected/fef04ed6-9416-4599-a960-cde56635da29-kube-api-access-f8ww8\") pod \"heat-operator-controller-manager-587c6bfdcf-tt4jw\" (UID: \"fef04ed6-9416-4599-a960-cde56635da29\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.294637 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.295480 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.309677 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2ldqq" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.335704 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336547 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfsh\" (UniqueName: \"kubernetes.io/projected/7771acfe-a081-49f6-afa7-79c7436486b4-kube-api-access-2nfsh\") pod \"ironic-operator-controller-manager-958664b5-tpj2j\" (UID: \"7771acfe-a081-49f6-afa7-79c7436486b4\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336582 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336602 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbwj\" (UniqueName: \"kubernetes.io/projected/5b5b3ff2-7c9d-412e-8eef-a203c3096694-kube-api-access-mvbwj\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336634 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6p49\" (UniqueName: \"kubernetes.io/projected/8a42f832-5088-4110-a8a9-cc3203ea4677-kube-api-access-t6p49\") pod \"keystone-operator-controller-manager-6978b79747-zhkh2\" (UID: \"8a42f832-5088-4110-a8a9-cc3203ea4677\") " 
pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336665 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhf9\" (UniqueName: \"kubernetes.io/projected/3828c08a-7f8d-4d56-8aad-9fb6a7ce294a-kube-api-access-xwhf9\") pod \"manila-operator-controller-manager-765668569f-9nxrk\" (UID: \"3828c08a-7f8d-4d56-8aad-9fb6a7ce294a\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.336689 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6n68\" (UniqueName: \"kubernetes.io/projected/911c19b6-72d1-4363-bae0-02bb5290a0c3-kube-api-access-q6n68\") pod \"horizon-operator-controller-manager-5fb775575f-ftmh8\" (UID: \"911c19b6-72d1-4363-bae0-02bb5290a0c3\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.337449 5031 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.337496 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert podName:5b5b3ff2-7c9d-412e-8eef-a203c3096694 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:15.837477564 +0000 UTC m=+996.337065516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert") pod "infra-operator-controller-manager-79955696d6-8dpt8" (UID: "5b5b3ff2-7c9d-412e-8eef-a203c3096694") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.339443 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.340500 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.380925 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.414994 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.415866 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.428415 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.441465 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pch\" (UniqueName: \"kubernetes.io/projected/b0b4b733-caa0-46a2-854a-0a96d676fe86-kube-api-access-s4pch\") pod \"mariadb-operator-controller-manager-67bf948998-r6hlv\" (UID: \"b0b4b733-caa0-46a2-854a-0a96d676fe86\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.441526 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6p49\" (UniqueName: \"kubernetes.io/projected/8a42f832-5088-4110-a8a9-cc3203ea4677-kube-api-access-t6p49\") pod \"keystone-operator-controller-manager-6978b79747-zhkh2\" (UID: \"8a42f832-5088-4110-a8a9-cc3203ea4677\") " pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.441552 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhf9\" (UniqueName: \"kubernetes.io/projected/3828c08a-7f8d-4d56-8aad-9fb6a7ce294a-kube-api-access-xwhf9\") pod \"manila-operator-controller-manager-765668569f-9nxrk\" (UID: \"3828c08a-7f8d-4d56-8aad-9fb6a7ce294a\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.442864 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.453257 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.454290 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.486022 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.502934 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2qbc6" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.503632 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5fsp9" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.511517 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cg2qb" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.528501 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.529867 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.547631 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jwbs2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.547705 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.548705 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549047 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zhl\" (UniqueName: \"kubernetes.io/projected/5925efab-b140-47f9-9b05-309973965161-kube-api-access-f8zhl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549265 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549342 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr29\" (UniqueName: \"kubernetes.io/projected/652f139c-6f12-42e1-88e8-fef00b383015-kube-api-access-dkr29\") pod \"octavia-operator-controller-manager-b6c99d9c5-pppjk\" (UID: \"652f139c-6f12-42e1-88e8-fef00b383015\") " pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549435 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pch\" (UniqueName: \"kubernetes.io/projected/b0b4b733-caa0-46a2-854a-0a96d676fe86-kube-api-access-s4pch\") pod \"mariadb-operator-controller-manager-67bf948998-r6hlv\" (UID: \"b0b4b733-caa0-46a2-854a-0a96d676fe86\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549465 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pb5\" (UniqueName: \"kubernetes.io/projected/b7af41a8-c82f-4e03-b775-ad36d931b8c5-kube-api-access-k7pb5\") pod \"nova-operator-controller-manager-ddcbfd695-hhbpv\" (UID: \"b7af41a8-c82f-4e03-b775-ad36d931b8c5\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.549495 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppzl\" (UniqueName: \"kubernetes.io/projected/4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6-kube-api-access-zppzl\") pod \"neutron-operator-controller-manager-694c5bfc85-ltbs2\" (UID: \"4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6\") " 
pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.553633 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.556815 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.557532 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.557629 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6n68\" (UniqueName: \"kubernetes.io/projected/911c19b6-72d1-4363-bae0-02bb5290a0c3-kube-api-access-q6n68\") pod \"horizon-operator-controller-manager-5fb775575f-ftmh8\" (UID: \"911c19b6-72d1-4363-bae0-02bb5290a0c3\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.570703 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.591024 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-cjpkc" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.591248 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.591375 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-srgkd" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.600925 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.601009 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.650889 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkr29\" (UniqueName: \"kubernetes.io/projected/652f139c-6f12-42e1-88e8-fef00b383015-kube-api-access-dkr29\") pod \"octavia-operator-controller-manager-b6c99d9c5-pppjk\" (UID: \"652f139c-6f12-42e1-88e8-fef00b383015\") " pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.651183 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7pb5\" (UniqueName: \"kubernetes.io/projected/b7af41a8-c82f-4e03-b775-ad36d931b8c5-kube-api-access-k7pb5\") pod \"nova-operator-controller-manager-ddcbfd695-hhbpv\" (UID: \"b7af41a8-c82f-4e03-b775-ad36d931b8c5\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.651208 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppzl\" (UniqueName: 
\"kubernetes.io/projected/4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6-kube-api-access-zppzl\") pod \"neutron-operator-controller-manager-694c5bfc85-ltbs2\" (UID: \"4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.651261 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8zhl\" (UniqueName: \"kubernetes.io/projected/5925efab-b140-47f9-9b05-309973965161-kube-api-access-f8zhl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.651303 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.654152 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.658375 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.658464 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:16.158441421 +0000 UTC m=+996.658029373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.659641 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbwj\" (UniqueName: \"kubernetes.io/projected/5b5b3ff2-7c9d-412e-8eef-a203c3096694-kube-api-access-mvbwj\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.668598 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhf9\" (UniqueName: \"kubernetes.io/projected/3828c08a-7f8d-4d56-8aad-9fb6a7ce294a-kube-api-access-xwhf9\") pod \"manila-operator-controller-manager-765668569f-9nxrk\" (UID: \"3828c08a-7f8d-4d56-8aad-9fb6a7ce294a\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.679771 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.700745 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pch\" (UniqueName: \"kubernetes.io/projected/b0b4b733-caa0-46a2-854a-0a96d676fe86-kube-api-access-s4pch\") pod \"mariadb-operator-controller-manager-67bf948998-r6hlv\" (UID: \"b0b4b733-caa0-46a2-854a-0a96d676fe86\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.700976 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfsh\" (UniqueName: \"kubernetes.io/projected/7771acfe-a081-49f6-afa7-79c7436486b4-kube-api-access-2nfsh\") pod \"ironic-operator-controller-manager-958664b5-tpj2j\" (UID: \"7771acfe-a081-49f6-afa7-79c7436486b4\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.703407 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6p49\" (UniqueName: \"kubernetes.io/projected/8a42f832-5088-4110-a8a9-cc3203ea4677-kube-api-access-t6p49\") pod \"keystone-operator-controller-manager-6978b79747-zhkh2\" (UID: \"8a42f832-5088-4110-a8a9-cc3203ea4677\") " pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.706884 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-46js4"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.709258 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.716899 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.728068 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4h7k5" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.749820 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppzl\" (UniqueName: \"kubernetes.io/projected/4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6-kube-api-access-zppzl\") pod \"neutron-operator-controller-manager-694c5bfc85-ltbs2\" (UID: \"4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.752962 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qjb\" (UniqueName: \"kubernetes.io/projected/b8416e4f-a2ee-46c8-90ff-2ed68301825e-kube-api-access-c5qjb\") pod \"placement-operator-controller-manager-5b964cf4cd-6hd46\" (UID: \"b8416e4f-a2ee-46c8-90ff-2ed68301825e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.753065 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx56z\" (UniqueName: \"kubernetes.io/projected/6046088f-7960-4675-a8a6-06eb441cea9f-kube-api-access-rx56z\") pod \"ovn-operator-controller-manager-788c46999f-fn2tc\" (UID: \"6046088f-7960-4675-a8a6-06eb441cea9f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.754197 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.788761 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-46js4"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.842024 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkr29\" (UniqueName: \"kubernetes.io/projected/652f139c-6f12-42e1-88e8-fef00b383015-kube-api-access-dkr29\") pod \"octavia-operator-controller-manager-b6c99d9c5-pppjk\" (UID: \"652f139c-6f12-42e1-88e8-fef00b383015\") " pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.842689 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7pb5\" (UniqueName: \"kubernetes.io/projected/b7af41a8-c82f-4e03-b775-ad36d931b8c5-kube-api-access-k7pb5\") pod \"nova-operator-controller-manager-ddcbfd695-hhbpv\" (UID: \"b7af41a8-c82f-4e03-b775-ad36d931b8c5\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.843565 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8zhl\" (UniqueName: \"kubernetes.io/projected/5925efab-b140-47f9-9b05-309973965161-kube-api-access-f8zhl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.857221 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qjb\" (UniqueName: \"kubernetes.io/projected/b8416e4f-a2ee-46c8-90ff-2ed68301825e-kube-api-access-c5qjb\") pod \"placement-operator-controller-manager-5b964cf4cd-6hd46\" (UID: \"b8416e4f-a2ee-46c8-90ff-2ed68301825e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.857317 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx56z\" (UniqueName: \"kubernetes.io/projected/6046088f-7960-4675-a8a6-06eb441cea9f-kube-api-access-rx56z\") pod \"ovn-operator-controller-manager-788c46999f-fn2tc\" (UID: \"6046088f-7960-4675-a8a6-06eb441cea9f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.857436 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.857502 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkvh\" (UniqueName: \"kubernetes.io/projected/3fb6584b-e21d-4c41-af40-6099ceda26fe-kube-api-access-dmkvh\") pod \"swift-operator-controller-manager-68fc8c869-46js4\" (UID: \"3fb6584b-e21d-4c41-af40-6099ceda26fe\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.857748 5031 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: E0129 08:55:15.857798 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert podName:5b5b3ff2-7c9d-412e-8eef-a203c3096694 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:16.857782292 +0000 UTC m=+997.357370244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert") pod "infra-operator-controller-manager-79955696d6-8dpt8" (UID: "5b5b3ff2-7c9d-412e-8eef-a203c3096694") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.865432 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.866360 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.868396 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-tcthp" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.882231 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.885184 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5qjb\" (UniqueName: \"kubernetes.io/projected/b8416e4f-a2ee-46c8-90ff-2ed68301825e-kube-api-access-c5qjb\") pod \"placement-operator-controller-manager-5b964cf4cd-6hd46\" (UID: \"b8416e4f-a2ee-46c8-90ff-2ed68301825e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.896447 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.897288 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.905308 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n8wrs" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.917274 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.918278 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx56z\" (UniqueName: \"kubernetes.io/projected/6046088f-7960-4675-a8a6-06eb441cea9f-kube-api-access-rx56z\") pod \"ovn-operator-controller-manager-788c46999f-fn2tc\" (UID: \"6046088f-7960-4675-a8a6-06eb441cea9f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.924778 5031 util.go:30] "No sandbox for pod can be found. 
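The infra-operator "cert" mount fails again here, and the durationBeforeRetry doubles from the 500ms seen earlier to 1s: consecutive failures of the same pending operation back off exponentially. A sketch of that schedule in Go (the doubling is read off the two data points in this log; the cap is an assumption, not the volume manager's actual constant):

    package backoffsketch

    import "time"

    // retrySchedule returns the delays implied by the log above: 500ms after
    // the first failure, 1s after the second, doubling thereafter up to an
    // assumed cap.
    func retrySchedule(failures int) []time.Duration {
        const (
            initialDelay = 500 * time.Millisecond // first durationBeforeRetry in the log
            maxDelay     = 2 * time.Minute        // assumed cap, for illustration
        )
        out := make([]time.Duration, 0, failures)
        d := initialDelay
        for i := 0; i < failures; i++ {
            out = append(out, d)
            if d*2 <= maxDelay {
                d *= 2
            }
        }
        return out
    }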
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.928442 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.929622 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.944406 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-h68vv" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.964383 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm"] Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.968422 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m68tm\" (UniqueName: \"kubernetes.io/projected/418034d3-f759-4efa-930f-c66f10db0fe2-kube-api-access-m68tm\") pod \"test-operator-controller-manager-56f8bfcd9f-tgkd9\" (UID: \"418034d3-f759-4efa-930f-c66f10db0fe2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.968468 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdsq8\" (UniqueName: \"kubernetes.io/projected/4e1db845-0d5b-489a-b3bf-a2921dc81cdb-kube-api-access-bdsq8\") pod \"watcher-operator-controller-manager-767b8bc766-vt2wm\" (UID: \"4e1db845-0d5b-489a-b3bf-a2921dc81cdb\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.968509 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkvh\" (UniqueName: \"kubernetes.io/projected/3fb6584b-e21d-4c41-af40-6099ceda26fe-kube-api-access-dmkvh\") pod \"swift-operator-controller-manager-68fc8c869-46js4\" (UID: \"3fb6584b-e21d-4c41-af40-6099ceda26fe\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.968563 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sxj\" (UniqueName: \"kubernetes.io/projected/f2eaf23b-b589-4c35-bb14-28a1aa1d9099-kube-api-access-67sxj\") pod \"telemetry-operator-controller-manager-684f4d697d-h5vhw\" (UID: \"f2eaf23b-b589-4c35-bb14-28a1aa1d9099\") " pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:15 crc kubenswrapper[5031]: I0129 08:55:15.972019 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.023921 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.026018 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkvh\" (UniqueName: \"kubernetes.io/projected/3fb6584b-e21d-4c41-af40-6099ceda26fe-kube-api-access-dmkvh\") pod \"swift-operator-controller-manager-68fc8c869-46js4\" (UID: \"3fb6584b-e21d-4c41-af40-6099ceda26fe\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.033953 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.034278 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.040745 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.040793 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.040980 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.041148 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zs86n" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070194 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070265 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m68tm\" (UniqueName: \"kubernetes.io/projected/418034d3-f759-4efa-930f-c66f10db0fe2-kube-api-access-m68tm\") pod \"test-operator-controller-manager-56f8bfcd9f-tgkd9\" (UID: \"418034d3-f759-4efa-930f-c66f10db0fe2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070292 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdsq8\" (UniqueName: \"kubernetes.io/projected/4e1db845-0d5b-489a-b3bf-a2921dc81cdb-kube-api-access-bdsq8\") pod \"watcher-operator-controller-manager-767b8bc766-vt2wm\" (UID: \"4e1db845-0d5b-489a-b3bf-a2921dc81cdb\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070335 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070382 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/bacd8bd3-412c-435e-b71d-e43f39daba5d-kube-api-access-ht2k4\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.070436 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67sxj\" (UniqueName: \"kubernetes.io/projected/f2eaf23b-b589-4c35-bb14-28a1aa1d9099-kube-api-access-67sxj\") pod \"telemetry-operator-controller-manager-684f4d697d-h5vhw\" (UID: \"f2eaf23b-b589-4c35-bb14-28a1aa1d9099\") " pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.084730 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.089555 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.090721 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.095898 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kw5qv" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.105736 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sxj\" (UniqueName: \"kubernetes.io/projected/f2eaf23b-b589-4c35-bb14-28a1aa1d9099-kube-api-access-67sxj\") pod \"telemetry-operator-controller-manager-684f4d697d-h5vhw\" (UID: \"f2eaf23b-b589-4c35-bb14-28a1aa1d9099\") " pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.112735 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.114184 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m68tm\" (UniqueName: \"kubernetes.io/projected/418034d3-f759-4efa-930f-c66f10db0fe2-kube-api-access-m68tm\") pod \"test-operator-controller-manager-56f8bfcd9f-tgkd9\" (UID: \"418034d3-f759-4efa-930f-c66f10db0fe2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.120920 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdsq8\" (UniqueName: \"kubernetes.io/projected/4e1db845-0d5b-489a-b3bf-a2921dc81cdb-kube-api-access-bdsq8\") pod \"watcher-operator-controller-manager-767b8bc766-vt2wm\" (UID: \"4e1db845-0d5b-489a-b3bf-a2921dc81cdb\") " 
pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.129884 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.181051 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.181454 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.181309 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.181612 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:16.681596735 +0000 UTC m=+997.181184687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.181562 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.181823 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:16.68181524 +0000 UTC m=+997.181403192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.181853 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/bacd8bd3-412c-435e-b71d-e43f39daba5d-kube-api-access-ht2k4\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.181898 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkgkv\" (UniqueName: \"kubernetes.io/projected/c3b8b573-36e5-48c9-bfb5-adff7608c393-kube-api-access-zkgkv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rwmm7\" (UID: \"c3b8b573-36e5-48c9-bfb5-adff7608c393\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.181943 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.182017 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.182038 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:17.182031397 +0000 UTC m=+997.681619349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.212176 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.218346 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/bacd8bd3-412c-435e-b71d-e43f39daba5d-kube-api-access-ht2k4\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.258063 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.273166 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.282977 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkgkv\" (UniqueName: \"kubernetes.io/projected/c3b8b573-36e5-48c9-bfb5-adff7608c393-kube-api-access-zkgkv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rwmm7\" (UID: \"c3b8b573-36e5-48c9-bfb5-adff7608c393\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.304079 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkgkv\" (UniqueName: \"kubernetes.io/projected/c3b8b573-36e5-48c9-bfb5-adff7608c393-kube-api-access-zkgkv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rwmm7\" (UID: \"c3b8b573-36e5-48c9-bfb5-adff7608c393\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.384542 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.396011 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.459578 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.565484 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.697907 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.698279 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.698435 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.698486 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:17.698469147 +0000 UTC m=+998.198057109 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.698832 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.698858 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:17.698849398 +0000 UTC m=+998.198437350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.788002 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.815974 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm"] Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.870680 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" event={"ID":"59d726a8-dfae-47c6-a479-682b32601f3b","Type":"ContainerStarted","Data":"a3dc6caab6f59ffd2bfb66efaf4ddad34516a8a22f8fac940b301614a435d8e1"} Jan 29 08:55:16 crc kubenswrapper[5031]: I0129 08:55:16.901170 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.901378 5031 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:16 crc kubenswrapper[5031]: E0129 08:55:16.901458 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert podName:5b5b3ff2-7c9d-412e-8eef-a203c3096694 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:18.901440168 +0000 UTC m=+999.401028120 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert") pod "infra-operator-controller-manager-79955696d6-8dpt8" (UID: "5b5b3ff2-7c9d-412e-8eef-a203c3096694") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.080523 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.199763 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.207757 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.207996 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.208051 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:19.208037092 +0000 UTC m=+999.707625044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.209966 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfef04ed6_9416_4599_a960_cde56635da29.slice/crio-6276cb06da390d03a35a4ab0e44dfe802b05503ee0e44e2bd5f5633ab3a80a8c WatchSource:0}: Error finding container 6276cb06da390d03a35a4ab0e44dfe802b05503ee0e44e2bd5f5633ab3a80a8c: Status 404 returned error can't find the container with id 6276cb06da390d03a35a4ab0e44dfe802b05503ee0e44e2bd5f5633ab3a80a8c Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.236674 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq"] Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.244228 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d7a2eca_248d_464e_b698_5f4daee374d3.slice/crio-db951128fd9d7e2d45652651b5d54e55a19e441cb8f71aa0a867d988e116811a WatchSource:0}: Error finding container db951128fd9d7e2d45652651b5d54e55a19e441cb8f71aa0a867d988e116811a: Status 404 returned error can't find the container with id db951128fd9d7e2d45652651b5d54e55a19e441cb8f71aa0a867d988e116811a Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.521711 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-9nxrk"] Jan 29 08:55:17 crc 
kubenswrapper[5031]: I0129 08:55:17.535903 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm"] Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.537549 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3828c08a_7f8d_4d56_8aad_9fb6a7ce294a.slice/crio-94cc48175b905cbb2a5cd797d238e309f4899a3d4e7005efb5de0703301dd625 WatchSource:0}: Error finding container 94cc48175b905cbb2a5cd797d238e309f4899a3d4e7005efb5de0703301dd625: Status 404 returned error can't find the container with id 94cc48175b905cbb2a5cd797d238e309f4899a3d4e7005efb5de0703301dd625 Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.544115 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.564603 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.581053 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.599314 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.608025 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.615076 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.623139 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.632579 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-46js4"] Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.635638 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2eaf23b_b589_4c35_bb14_28a1aa1d9099.slice/crio-89f2219c1f2bfe1b474dc38cf2cd48de1e1487a314664f09685327bb663e68a2 WatchSource:0}: Error finding container 89f2219c1f2bfe1b474dc38cf2cd48de1e1487a314664f09685327bb663e68a2: Status 404 returned error can't find the container with id 89f2219c1f2bfe1b474dc38cf2cd48de1e1487a314664f09685327bb663e68a2 Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.638170 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk"] Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.639567 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8416e4f_a2ee_46c8_90ff_2ed68301825e.slice/crio-aec09f4ed4d34fa715c90502fc0dd9a6fdcdcd2fbe805a8e6fa1554a0659ffa5 WatchSource:0}: Error finding container aec09f4ed4d34fa715c90502fc0dd9a6fdcdcd2fbe805a8e6fa1554a0659ffa5: Status 404 returned error can't find the container with id 
aec09f4ed4d34fa715c90502fc0dd9a6fdcdcd2fbe805a8e6fa1554a0659ffa5 Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.641672 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod418034d3_f759_4efa_930f_c66f10db0fe2.slice/crio-e21801b6eeda24fb38a7fa22752b1329bfd28fab82e87d458b8522d76bbc52dc WatchSource:0}: Error finding container e21801b6eeda24fb38a7fa22752b1329bfd28fab82e87d458b8522d76bbc52dc: Status 404 returned error can't find the container with id e21801b6eeda24fb38a7fa22752b1329bfd28fab82e87d458b8522d76bbc52dc Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.644092 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0b4b733_caa0_46a2_854a_0a96d676fe86.slice/crio-751c707874c75c23df4b269457d6db510434ff4e6c3a21e9b5fc8d7f25e3471d WatchSource:0}: Error finding container 751c707874c75c23df4b269457d6db510434ff4e6c3a21e9b5fc8d7f25e3471d: Status 404 returned error can't find the container with id 751c707874c75c23df4b269457d6db510434ff4e6c3a21e9b5fc8d7f25e3471d Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.644214 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5qjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b964cf4cd-6hd46_openstack-operators(b8416e4f-a2ee-46c8-90ff-2ed68301825e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.644337 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1e7734e8d3be22f053bbcddbe5dfd2b383ca0ad81b916d6447bc8d035321c001,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67sxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-684f4d697d-h5vhw_openstack-operators(f2eaf23b-b589-4c35-bb14-28a1aa1d9099): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.644432 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9"] Jan 29 08:55:17 crc kubenswrapper[5031]: W0129 08:55:17.645262 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b8b573_36e5_48c9_bfb5_adff7608c393.slice/crio-0ac7c80406b3253ecc9722ec1f5f2536e20cf3a691754e2685d7ed4456ce4662 WatchSource:0}: Error finding container 0ac7c80406b3253ecc9722ec1f5f2536e20cf3a691754e2685d7ed4456ce4662: Status 404 returned error can't find the container with id 0ac7c80406b3253ecc9722ec1f5f2536e20cf3a691754e2685d7ed4456ce4662 Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.645290 5031 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podUID="b8416e4f-a2ee-46c8-90ff-2ed68301825e" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.645523 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" podUID="f2eaf23b-b589-4c35-bb14-28a1aa1d9099" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.645772 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4pch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-r6hlv_openstack-operators(b0b4b733-caa0-46a2-854a-0a96d676fe86): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.645898 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:279d0fc97ed93182d70f3c13a43a3bb07a9d54998da2d7e24fc35175428e908a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkr29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-b6c99d9c5-pppjk_openstack-operators(652f139c-6f12-42e1-88e8-fef00b383015): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.647439 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" podUID="652f139c-6f12-42e1-88e8-fef00b383015" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.647491 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" podUID="b0b4b733-caa0-46a2-854a-0a96d676fe86" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.647495 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m68tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-tgkd9_openstack-operators(418034d3-f759-4efa-930f-c66f10db0fe2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.649054 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" podUID="418034d3-f759-4efa-930f-c66f10db0fe2" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.649233 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkgkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rwmm7_openstack-operators(c3b8b573-36e5-48c9-bfb5-adff7608c393): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.650803 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podUID="c3b8b573-36e5-48c9-bfb5-adff7608c393" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.655609 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.666355 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7"] Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.717682 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.717775 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.717843 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.717900 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:19.717885874 +0000 UTC m=+1000.217473826 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.718010 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.718098 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:19.7180658 +0000 UTC m=+1000.217653912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.883810 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" event={"ID":"a1850026-d710-4da7-883b-1b7149900523","Type":"ContainerStarted","Data":"73d606582a0fe717c04d841009c4ae6c3e116d603d49b6b0299b4526f44067e5"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.885645 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" event={"ID":"b8416e4f-a2ee-46c8-90ff-2ed68301825e","Type":"ContainerStarted","Data":"aec09f4ed4d34fa715c90502fc0dd9a6fdcdcd2fbe805a8e6fa1554a0659ffa5"} Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.887456 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podUID="b8416e4f-a2ee-46c8-90ff-2ed68301825e" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.888150 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" event={"ID":"4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6","Type":"ContainerStarted","Data":"4cbf5256e1cdb6053e8145c2b5c60c374c21a6e5c8a1d960c3e1feac61b43d90"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.890738 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" event={"ID":"6046088f-7960-4675-a8a6-06eb441cea9f","Type":"ContainerStarted","Data":"d236d5d58f13c5e84ee56aceaf997697d5777d99951016eb71b6d1014419f248"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.909182 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" event={"ID":"b0b4b733-caa0-46a2-854a-0a96d676fe86","Type":"ContainerStarted","Data":"751c707874c75c23df4b269457d6db510434ff4e6c3a21e9b5fc8d7f25e3471d"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.911424 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" event={"ID":"418034d3-f759-4efa-930f-c66f10db0fe2","Type":"ContainerStarted","Data":"e21801b6eeda24fb38a7fa22752b1329bfd28fab82e87d458b8522d76bbc52dc"} Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.913040 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" podUID="418034d3-f759-4efa-930f-c66f10db0fe2" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.913292 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" podUID="b0b4b733-caa0-46a2-854a-0a96d676fe86" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.918197 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" event={"ID":"9d7a2eca-248d-464e-b698-5f4daee374d3","Type":"ContainerStarted","Data":"db951128fd9d7e2d45652651b5d54e55a19e441cb8f71aa0a867d988e116811a"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.921246 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" event={"ID":"3828c08a-7f8d-4d56-8aad-9fb6a7ce294a","Type":"ContainerStarted","Data":"94cc48175b905cbb2a5cd797d238e309f4899a3d4e7005efb5de0703301dd625"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.924397 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" event={"ID":"7771acfe-a081-49f6-afa7-79c7436486b4","Type":"ContainerStarted","Data":"76004066a92c3eb24315b63a3fc25d5b8eca712d2d36f09455805b2e62f458c2"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.925912 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" event={"ID":"4e1db845-0d5b-489a-b3bf-a2921dc81cdb","Type":"ContainerStarted","Data":"718d2036a22bd37b74152929d6efcf17a9b95d94201ae91d18259370872a3a45"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.929855 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" event={"ID":"3fb6584b-e21d-4c41-af40-6099ceda26fe","Type":"ContainerStarted","Data":"b70d1faa36bcd820c4a450056cff392131ebbd0386cfe3f7195f74931004ca02"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.938741 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" event={"ID":"8a42f832-5088-4110-a8a9-cc3203ea4677","Type":"ContainerStarted","Data":"295e6e1a2c630b2cbf92112376039aa9b0c2e7c8739960373e4a5ad9f7d079aa"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.940505 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" 
event={"ID":"652f139c-6f12-42e1-88e8-fef00b383015","Type":"ContainerStarted","Data":"4e6e7d7cbab8d72140a30cfc6bb458516b608c1223dbacb1fa939f2024f7eaf2"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.941634 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" event={"ID":"f2eaf23b-b589-4c35-bb14-28a1aa1d9099","Type":"ContainerStarted","Data":"89f2219c1f2bfe1b474dc38cf2cd48de1e1487a314664f09685327bb663e68a2"} Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.942466 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:279d0fc97ed93182d70f3c13a43a3bb07a9d54998da2d7e24fc35175428e908a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" podUID="652f139c-6f12-42e1-88e8-fef00b383015" Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.944921 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1e7734e8d3be22f053bbcddbe5dfd2b383ca0ad81b916d6447bc8d035321c001\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" podUID="f2eaf23b-b589-4c35-bb14-28a1aa1d9099" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.944934 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" event={"ID":"911c19b6-72d1-4363-bae0-02bb5290a0c3","Type":"ContainerStarted","Data":"16b44abadb57257b06cf09749363021885e1b4335f77e58b73a3b56beeab7f61"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.946082 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" event={"ID":"6b581b93-53b8-4bda-a3bc-7ab837f7aec3","Type":"ContainerStarted","Data":"b247c66695942344cd7b8af5c3aef1fd2c04de04f6bee69d4c68cda52b6a0d6a"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.958234 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" event={"ID":"c3b8b573-36e5-48c9-bfb5-adff7608c393","Type":"ContainerStarted","Data":"0ac7c80406b3253ecc9722ec1f5f2536e20cf3a691754e2685d7ed4456ce4662"} Jan 29 08:55:17 crc kubenswrapper[5031]: E0129 08:55:17.961420 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podUID="c3b8b573-36e5-48c9-bfb5-adff7608c393" Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.961944 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" event={"ID":"b7af41a8-c82f-4e03-b775-ad36d931b8c5","Type":"ContainerStarted","Data":"8694748bc0b6b7f82f2d159ebc3a367720d8dcdcbf1aeb678bd91cef77f409f3"} Jan 29 08:55:17 crc kubenswrapper[5031]: I0129 08:55:17.973082 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" 
event={"ID":"fef04ed6-9416-4599-a960-cde56635da29","Type":"ContainerStarted","Data":"6276cb06da390d03a35a4ab0e44dfe802b05503ee0e44e2bd5f5633ab3a80a8c"} Jan 29 08:55:18 crc kubenswrapper[5031]: I0129 08:55:18.963995 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:18 crc kubenswrapper[5031]: E0129 08:55:18.964283 5031 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:18 crc kubenswrapper[5031]: E0129 08:55:18.964677 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert podName:5b5b3ff2-7c9d-412e-8eef-a203c3096694 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:22.964487042 +0000 UTC m=+1003.464074994 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert") pod "infra-operator-controller-manager-79955696d6-8dpt8" (UID: "5b5b3ff2-7c9d-412e-8eef-a203c3096694") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.006807 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podUID="c3b8b573-36e5-48c9-bfb5-adff7608c393" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.007195 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" podUID="418034d3-f759-4efa-930f-c66f10db0fe2" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.007303 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podUID="b8416e4f-a2ee-46c8-90ff-2ed68301825e" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.007404 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:279d0fc97ed93182d70f3c13a43a3bb07a9d54998da2d7e24fc35175428e908a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" podUID="652f139c-6f12-42e1-88e8-fef00b383015" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.007880 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/telemetry-operator@sha256:1e7734e8d3be22f053bbcddbe5dfd2b383ca0ad81b916d6447bc8d035321c001\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" podUID="f2eaf23b-b589-4c35-bb14-28a1aa1d9099" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.013489 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" podUID="b0b4b733-caa0-46a2-854a-0a96d676fe86" Jan 29 08:55:19 crc kubenswrapper[5031]: I0129 08:55:19.269868 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.270138 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.270219 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:23.270199112 +0000 UTC m=+1003.769787064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: I0129 08:55:19.779326 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:19 crc kubenswrapper[5031]: I0129 08:55:19.779458 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.779603 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.779662 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.779707 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs 
podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:23.779677295 +0000 UTC m=+1004.279265247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:19 crc kubenswrapper[5031]: E0129 08:55:19.779751 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:23.779733566 +0000 UTC m=+1004.279321518 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: I0129 08:55:23.063024 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.063263 5031 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.063633 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert podName:5b5b3ff2-7c9d-412e-8eef-a203c3096694 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:31.063610745 +0000 UTC m=+1011.563198697 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert") pod "infra-operator-controller-manager-79955696d6-8dpt8" (UID: "5b5b3ff2-7c9d-412e-8eef-a203c3096694") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: I0129 08:55:23.367902 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.368082 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.368176 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:31.368157933 +0000 UTC m=+1011.867746075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: I0129 08:55:23.875938 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:23 crc kubenswrapper[5031]: I0129 08:55:23.876278 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.876332 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.876443 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.876473 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:31.876454403 +0000 UTC m=+1012.376042355 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:23 crc kubenswrapper[5031]: E0129 08:55:23.876494 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:31.876486754 +0000 UTC m=+1012.376074706 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.083516 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.089457 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b5b3ff2-7c9d-412e-8eef-a203c3096694-cert\") pod \"infra-operator-controller-manager-79955696d6-8dpt8\" (UID: \"5b5b3ff2-7c9d-412e-8eef-a203c3096694\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.341683 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.386983 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.387158 5031 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.387223 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert podName:5925efab-b140-47f9-9b05-309973965161 nodeName:}" failed. No retries permitted until 2026-01-29 08:55:47.387205258 +0000 UTC m=+1027.886793210 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" (UID: "5925efab-b140-47f9-9b05-309973965161") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.797906 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da" Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.798157 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xwhf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-765668569f-9nxrk_openstack-operators(3828c08a-7f8d-4d56-8aad-9fb6a7ce294a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.799338 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" 
podUID="3828c08a-7f8d-4d56-8aad-9fb6a7ce294a" Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.893506 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:31 crc kubenswrapper[5031]: I0129 08:55:31.893811 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.893703 5031 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.893924 5031 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.893942 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:47.893920265 +0000 UTC m=+1028.393508217 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "webhook-server-cert" not found Jan 29 08:55:31 crc kubenswrapper[5031]: E0129 08:55:31.893964 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs podName:bacd8bd3-412c-435e-b71d-e43f39daba5d nodeName:}" failed. No retries permitted until 2026-01-29 08:55:47.893953966 +0000 UTC m=+1028.393541918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs") pod "openstack-operator-controller-manager-7fd9db8655-wjbcx" (UID: "bacd8bd3-412c-435e-b71d-e43f39daba5d") : secret "metrics-server-cert" not found Jan 29 08:55:32 crc kubenswrapper[5031]: E0129 08:55:32.357519 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" podUID="3828c08a-7f8d-4d56-8aad-9fb6a7ce294a" Jan 29 08:55:33 crc kubenswrapper[5031]: E0129 08:55:33.574775 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:45ef0b95f941479535575b3d2cabb58a52e1d8490eed3da1bca9acd49344a722" Jan 29 08:55:33 crc kubenswrapper[5031]: E0129 08:55:33.574948 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:45ef0b95f941479535575b3d2cabb58a52e1d8490eed3da1bca9acd49344a722,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6p49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6978b79747-zhkh2_openstack-operators(8a42f832-5088-4110-a8a9-cc3203ea4677): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:33 crc kubenswrapper[5031]: E0129 08:55:33.576420 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" podUID="8a42f832-5088-4110-a8a9-cc3203ea4677" Jan 29 08:55:34 crc kubenswrapper[5031]: E0129 08:55:34.402294 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:45ef0b95f941479535575b3d2cabb58a52e1d8490eed3da1bca9acd49344a722\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" podUID="8a42f832-5088-4110-a8a9-cc3203ea4677" Jan 29 08:55:34 crc kubenswrapper[5031]: E0129 08:55:34.410271 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d" Jan 29 08:55:34 crc kubenswrapper[5031]: E0129 08:55:34.410458 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4blcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66dfbd6f5d-f5hc7_openstack-operators(59d726a8-dfae-47c6-a479-682b32601f3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:34 crc kubenswrapper[5031]: E0129 08:55:34.411612 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" podUID="59d726a8-dfae-47c6-a479-682b32601f3b" Jan 29 08:55:35 crc kubenswrapper[5031]: E0129 08:55:35.189289 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:eae1fc0ecdfc4f0bef5a980affa60155a5baacf1bdaaeeb18d9c2680f762bc9d" Jan 29 08:55:35 crc kubenswrapper[5031]: E0129 08:55:35.189806 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:eae1fc0ecdfc4f0bef5a980affa60155a5baacf1bdaaeeb18d9c2680f762bc9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6zd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-6bc7f4f4cf-6pqwq_openstack-operators(9d7a2eca-248d-464e-b698-5f4daee374d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:35 crc kubenswrapper[5031]: E0129 08:55:35.191599 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" podUID="9d7a2eca-248d-464e-b698-5f4daee374d3" Jan 29 08:55:35 crc kubenswrapper[5031]: E0129 08:55:35.409916 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" podUID="59d726a8-dfae-47c6-a479-682b32601f3b" Jan 29 08:55:35 crc kubenswrapper[5031]: E0129 08:55:35.410806 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:eae1fc0ecdfc4f0bef5a980affa60155a5baacf1bdaaeeb18d9c2680f762bc9d\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" podUID="9d7a2eca-248d-464e-b698-5f4daee374d3" Jan 29 08:55:36 crc kubenswrapper[5031]: E0129 08:55:36.044406 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 08:55:36 crc kubenswrapper[5031]: E0129 08:55:36.044582 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmkvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-46js4_openstack-operators(3fb6584b-e21d-4c41-af40-6099ceda26fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:36 crc kubenswrapper[5031]: E0129 08:55:36.045838 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" podUID="3fb6584b-e21d-4c41-af40-6099ceda26fe" Jan 29 08:55:36 crc kubenswrapper[5031]: E0129 08:55:36.414995 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" podUID="3fb6584b-e21d-4c41-af40-6099ceda26fe" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.132176 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.132415 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6n68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-ftmh8_openstack-operators(911c19b6-72d1-4363-bae0-02bb5290a0c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.133633 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" podUID="911c19b6-72d1-4363-bae0-02bb5290a0c3" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.434258 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" podUID="911c19b6-72d1-4363-bae0-02bb5290a0c3" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.671815 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.671989 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7pb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-hhbpv_openstack-operators(b7af41a8-c82f-4e03-b775-ad36d931b8c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:37 crc kubenswrapper[5031]: E0129 08:55:37.673215 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" podUID="b7af41a8-c82f-4e03-b775-ad36d931b8c5" Jan 29 08:55:38 crc kubenswrapper[5031]: E0129 08:55:38.438703 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" podUID="b7af41a8-c82f-4e03-b775-ad36d931b8c5" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.402636 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.412133 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5925efab-b140-47f9-9b05-309973965161-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp\" (UID: \"5925efab-b140-47f9-9b05-309973965161\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.696133 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jwbs2" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.701496 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.910936 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.911068 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.916109 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.919459 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bacd8bd3-412c-435e-b71d-e43f39daba5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fd9db8655-wjbcx\" (UID: \"bacd8bd3-412c-435e-b71d-e43f39daba5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.932585 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zs86n" Jan 29 08:55:47 crc kubenswrapper[5031]: I0129 08:55:47.940503 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:50 crc kubenswrapper[5031]: E0129 08:55:50.570392 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 08:55:50 crc kubenswrapper[5031]: E0129 08:55:50.570940 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5qjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-6hd46_openstack-operators(b8416e4f-a2ee-46c8-90ff-2ed68301825e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:50 crc kubenswrapper[5031]: E0129 08:55:50.573971 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podUID="b8416e4f-a2ee-46c8-90ff-2ed68301825e" Jan 29 08:55:53 crc kubenswrapper[5031]: E0129 08:55:53.160039 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 08:55:53 crc kubenswrapper[5031]: E0129 08:55:53.160714 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkgkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rwmm7_openstack-operators(c3b8b573-36e5-48c9-bfb5-adff7608c393): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:55:53 crc kubenswrapper[5031]: E0129 08:55:53.163609 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podUID="c3b8b573-36e5-48c9-bfb5-adff7608c393" Jan 29 08:55:53 crc kubenswrapper[5031]: I0129 08:55:53.534099 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" event={"ID":"4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6","Type":"ContainerStarted","Data":"a38ff299eed0122bc3c8b4a711efeefad0312f7e99cecbc7b3f2e588728ca459"} Jan 29 08:55:53 crc kubenswrapper[5031]: I0129 08:55:53.535427 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:55:53 crc kubenswrapper[5031]: I0129 08:55:53.555313 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" podStartSLOduration=17.950504654 podStartE2EDuration="38.555290006s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.618490176 +0000 UTC m=+998.118078128" lastFinishedPulling="2026-01-29 08:55:38.223275508 +0000 UTC m=+1018.722863480" observedRunningTime="2026-01-29 08:55:53.549457777 +0000 UTC m=+1034.049045749" watchObservedRunningTime="2026-01-29 08:55:53.555290006 +0000 UTC m=+1034.054877958" Jan 29 08:55:53 crc kubenswrapper[5031]: I0129 08:55:53.610903 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8"] Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.450922 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp"] Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.541629 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" event={"ID":"5b5b3ff2-7c9d-412e-8eef-a203c3096694","Type":"ContainerStarted","Data":"c1501a3ccc6f2ff19c8dc0c911a5ebe368aa0f4293571f6a66c556a8d40bfb2d"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.547175 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" event={"ID":"6046088f-7960-4675-a8a6-06eb441cea9f","Type":"ContainerStarted","Data":"c2661443c56929484372d21c4fa0187a3ba7eb2295e72fee5c46eba42d420b26"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.547337 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.550861 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" event={"ID":"6b581b93-53b8-4bda-a3bc-7ab837f7aec3","Type":"ContainerStarted","Data":"a670bef93580e6386abcfce5109eff4dba8638adf902230148a265544f0ff122"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.552024 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" event={"ID":"7771acfe-a081-49f6-afa7-79c7436486b4","Type":"ContainerStarted","Data":"2274e5cc07011c671dcb133cd5adc70c81f67a5eb760fcd3c9353fab8f82424e"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.552633 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.553619 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" event={"ID":"a1850026-d710-4da7-883b-1b7149900523","Type":"ContainerStarted","Data":"d291a9a46cd29a87aebea8a9936858eff249ab1db006af3f86a6cb03c388764c"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.553997 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.558128 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" 
event={"ID":"4e1db845-0d5b-489a-b3bf-a2921dc81cdb","Type":"ContainerStarted","Data":"0cbc1eb483f2185a9021f009d06094532aaef9429fb31f030980fb38316c0d6b"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.558772 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.587557 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx"] Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.594899 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" podStartSLOduration=16.699728432 podStartE2EDuration="39.594880471s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.627488162 +0000 UTC m=+998.127076114" lastFinishedPulling="2026-01-29 08:55:40.522640201 +0000 UTC m=+1021.022228153" observedRunningTime="2026-01-29 08:55:54.586710179 +0000 UTC m=+1035.086298141" watchObservedRunningTime="2026-01-29 08:55:54.594880471 +0000 UTC m=+1035.094468423" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.600880 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" event={"ID":"5925efab-b140-47f9-9b05-309973965161","Type":"ContainerStarted","Data":"7840d9f91acfcfe94a983eb207ac53c4a39ff59beebcc06276c560de021179e4"} Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.717613 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" podStartSLOduration=20.086136526 podStartE2EDuration="40.717595115s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.591697006 +0000 UTC m=+998.091284958" lastFinishedPulling="2026-01-29 08:55:38.223155595 +0000 UTC m=+1018.722743547" observedRunningTime="2026-01-29 08:55:54.631696814 +0000 UTC m=+1035.131284766" watchObservedRunningTime="2026-01-29 08:55:54.717595115 +0000 UTC m=+1035.217183067" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.718005 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm" podStartSLOduration=19.390613086 podStartE2EDuration="40.717999346s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:16.896993367 +0000 UTC m=+997.396581319" lastFinishedPulling="2026-01-29 08:55:38.224379637 +0000 UTC m=+1018.723967579" observedRunningTime="2026-01-29 08:55:54.69720464 +0000 UTC m=+1035.196792592" watchObservedRunningTime="2026-01-29 08:55:54.717999346 +0000 UTC m=+1035.217587298" Jan 29 08:55:54 crc kubenswrapper[5031]: I0129 08:55:54.729555 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" podStartSLOduration=19.063166852 podStartE2EDuration="39.729535851s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.559476738 +0000 UTC m=+998.059064690" lastFinishedPulling="2026-01-29 08:55:38.225845737 +0000 UTC m=+1018.725433689" observedRunningTime="2026-01-29 08:55:54.728527683 +0000 UTC m=+1035.228115645" watchObservedRunningTime="2026-01-29 08:55:54.729535851 +0000 UTC 
m=+1035.229123803" Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.729994 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" event={"ID":"b0b4b733-caa0-46a2-854a-0a96d676fe86","Type":"ContainerStarted","Data":"c5294b1688026c064a96a3ffa9d73bd50b6c2a443c4c74bd5451a15a0885ef87"} Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.731354 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.773969 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" event={"ID":"418034d3-f759-4efa-930f-c66f10db0fe2","Type":"ContainerStarted","Data":"92af05f0ccb0d5dce12a06a076c6e4f785ee0241b69eb4f5df347a08f4d93e70"} Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.774561 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.780406 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" podStartSLOduration=5.256104524 podStartE2EDuration="40.780389964s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.645636866 +0000 UTC m=+998.145224818" lastFinishedPulling="2026-01-29 08:55:53.169922306 +0000 UTC m=+1033.669510258" observedRunningTime="2026-01-29 08:55:55.779089618 +0000 UTC m=+1036.278677570" watchObservedRunningTime="2026-01-29 08:55:55.780389964 +0000 UTC m=+1036.279977916" Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.788778 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" event={"ID":"652f139c-6f12-42e1-88e8-fef00b383015","Type":"ContainerStarted","Data":"c10a095f6becc6935c15ad55a53be6bf37b67b1b870439d7d0e0426d686a072a"} Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.789097 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.993318 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" event={"ID":"bacd8bd3-412c-435e-b71d-e43f39daba5d","Type":"ContainerStarted","Data":"e0f2dff638049431f28dc0fcb95ca2c436adef29eb3960d154109fdd93dbefc2"} Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.993379 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" event={"ID":"bacd8bd3-412c-435e-b71d-e43f39daba5d","Type":"ContainerStarted","Data":"f8fa41d87582eac6c10a0e2f6be421b9b265d2954030c743fae8750560ecc171"} Jan 29 08:55:55 crc kubenswrapper[5031]: I0129 08:55:55.994101 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.003423 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" 
event={"ID":"911c19b6-72d1-4363-bae0-02bb5290a0c3","Type":"ContainerStarted","Data":"e65327fd261efbf1f3fae4850e70854512c86bb9d0ee5227eb106d7810e24a3d"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.004202 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.009298 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" podStartSLOduration=5.416831434 podStartE2EDuration="41.009281491s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.647356433 +0000 UTC m=+998.146944385" lastFinishedPulling="2026-01-29 08:55:53.23980648 +0000 UTC m=+1033.739394442" observedRunningTime="2026-01-29 08:55:56.004361537 +0000 UTC m=+1036.503949489" watchObservedRunningTime="2026-01-29 08:55:56.009281491 +0000 UTC m=+1036.508869443" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.020670 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" event={"ID":"8a42f832-5088-4110-a8a9-cc3203ea4677","Type":"ContainerStarted","Data":"ec6f5d9546df62d671b953bdb77f9d8ae089932283a7246e026c44878c1525d7"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.021028 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.028272 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" event={"ID":"9d7a2eca-248d-464e-b698-5f4daee374d3","Type":"ContainerStarted","Data":"51ffc374a0357fbc63e2682daba54523ee1c9c9c6c8a61674689803b02eff593"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.029045 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.051006 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" podStartSLOduration=5.656481056 podStartE2EDuration="42.050992047s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.118141753 +0000 UTC m=+997.617729705" lastFinishedPulling="2026-01-29 08:55:53.512652744 +0000 UTC m=+1034.012240696" observedRunningTime="2026-01-29 08:55:56.050414291 +0000 UTC m=+1036.550002243" watchObservedRunningTime="2026-01-29 08:55:56.050992047 +0000 UTC m=+1036.550579999" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.052663 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" event={"ID":"b7af41a8-c82f-4e03-b775-ad36d931b8c5","Type":"ContainerStarted","Data":"1ff6fd280dfa036eb3110ba478f44eedb28e11a47c1ecbe1049ef46ef0974bb1"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.053404 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.054442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" 
event={"ID":"f2eaf23b-b589-4c35-bb14-28a1aa1d9099","Type":"ContainerStarted","Data":"02353e082e9949a6e9a8f6a331887e3081aae3ddfe15c2b10a1751783642d3d5"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.054605 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.072159 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" event={"ID":"fef04ed6-9416-4599-a960-cde56635da29","Type":"ContainerStarted","Data":"5d6d62629dca94dc33fc202f00a014eab0b3dc6388752338c40d82ff8ebf30f2"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.072835 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.087151 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" podStartSLOduration=5.492380703 podStartE2EDuration="41.087134122s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.645847932 +0000 UTC m=+998.145435884" lastFinishedPulling="2026-01-29 08:55:53.240601351 +0000 UTC m=+1033.740189303" observedRunningTime="2026-01-29 08:55:56.081785066 +0000 UTC m=+1036.581373018" watchObservedRunningTime="2026-01-29 08:55:56.087134122 +0000 UTC m=+1036.586722074" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.097299 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" event={"ID":"59d726a8-dfae-47c6-a479-682b32601f3b","Type":"ContainerStarted","Data":"06635815d43a34b920594260eda766496a77a9671cf59400986568712c651598"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.097750 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.171949 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" event={"ID":"3fb6584b-e21d-4c41-af40-6099ceda26fe","Type":"ContainerStarted","Data":"39edd1ea51be8c10eb1b02edbf49ea6da3e277bbe5f4426f5c806035666a3cbb"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.173126 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.182787 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" podStartSLOduration=41.182765828 podStartE2EDuration="41.182765828s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:55:56.177503275 +0000 UTC m=+1036.677091227" watchObservedRunningTime="2026-01-29 08:55:56.182765828 +0000 UTC m=+1036.682353780" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.183453 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" 
event={"ID":"3828c08a-7f8d-4d56-8aad-9fb6a7ce294a","Type":"ContainerStarted","Data":"0565ad379f1a2addb72baa1e238f8a6fd9925dcd41dd7f3b8c9be904f2d17024"} Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.183855 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.185387 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.274727 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" podStartSLOduration=6.301911973 podStartE2EDuration="42.274670052s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.543544614 +0000 UTC m=+998.043132566" lastFinishedPulling="2026-01-29 08:55:53.516302693 +0000 UTC m=+1034.015890645" observedRunningTime="2026-01-29 08:55:56.248679654 +0000 UTC m=+1036.748267606" watchObservedRunningTime="2026-01-29 08:55:56.274670052 +0000 UTC m=+1036.774258004" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.303217 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" podStartSLOduration=5.452200448 podStartE2EDuration="41.303171659s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.630062242 +0000 UTC m=+998.129650194" lastFinishedPulling="2026-01-29 08:55:53.481033453 +0000 UTC m=+1033.980621405" observedRunningTime="2026-01-29 08:55:56.301463972 +0000 UTC m=+1036.801051944" watchObservedRunningTime="2026-01-29 08:55:56.303171659 +0000 UTC m=+1036.802759621" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.555241 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" podStartSLOduration=6.046436 podStartE2EDuration="41.555206577s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.644196387 +0000 UTC m=+998.143784349" lastFinishedPulling="2026-01-29 08:55:53.152966974 +0000 UTC m=+1033.652554926" observedRunningTime="2026-01-29 08:55:56.271786153 +0000 UTC m=+1036.771374125" watchObservedRunningTime="2026-01-29 08:55:56.555206577 +0000 UTC m=+1037.054794529" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.735799 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7" podStartSLOduration=5.898775568 podStartE2EDuration="42.735783556s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:16.644997211 +0000 UTC m=+997.144585163" lastFinishedPulling="2026-01-29 08:55:53.482005199 +0000 UTC m=+1033.981593151" observedRunningTime="2026-01-29 08:55:56.734691627 +0000 UTC m=+1037.234279589" watchObservedRunningTime="2026-01-29 08:55:56.735783556 +0000 UTC m=+1037.235371508" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.834561 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw" podStartSLOduration=21.825322425 podStartE2EDuration="42.834544777s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" 
firstStartedPulling="2026-01-29 08:55:17.215092194 +0000 UTC m=+997.714680146" lastFinishedPulling="2026-01-29 08:55:38.224314546 +0000 UTC m=+1018.723902498" observedRunningTime="2026-01-29 08:55:56.833624502 +0000 UTC m=+1037.333212464" watchObservedRunningTime="2026-01-29 08:55:56.834544777 +0000 UTC m=+1037.334132729" Jan 29 08:55:56 crc kubenswrapper[5031]: I0129 08:55:56.872650 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" podStartSLOduration=6.002360958 podStartE2EDuration="41.872629315s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.630468062 +0000 UTC m=+998.130056014" lastFinishedPulling="2026-01-29 08:55:53.500736419 +0000 UTC m=+1034.000324371" observedRunningTime="2026-01-29 08:55:56.868431701 +0000 UTC m=+1037.368019653" watchObservedRunningTime="2026-01-29 08:55:56.872629315 +0000 UTC m=+1037.372217267" Jan 29 08:55:57 crc kubenswrapper[5031]: I0129 08:55:57.000255 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" podStartSLOduration=7.048148016 podStartE2EDuration="43.000237182s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.559112778 +0000 UTC m=+998.058700730" lastFinishedPulling="2026-01-29 08:55:53.511201944 +0000 UTC m=+1034.010789896" observedRunningTime="2026-01-29 08:55:56.998666259 +0000 UTC m=+1037.498254211" watchObservedRunningTime="2026-01-29 08:55:57.000237182 +0000 UTC m=+1037.499825134" Jan 29 08:55:57 crc kubenswrapper[5031]: I0129 08:55:57.003047 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq" podStartSLOduration=6.810418658 podStartE2EDuration="43.003033948s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.246641924 +0000 UTC m=+997.746229886" lastFinishedPulling="2026-01-29 08:55:53.439257224 +0000 UTC m=+1033.938845176" observedRunningTime="2026-01-29 08:55:56.976559967 +0000 UTC m=+1037.476147919" watchObservedRunningTime="2026-01-29 08:55:57.003033948 +0000 UTC m=+1037.502621900" Jan 29 08:56:00 crc kubenswrapper[5031]: I0129 08:56:00.366259 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5" podStartSLOduration=24.121612545 podStartE2EDuration="46.366237949s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:16.896928085 +0000 UTC m=+997.396516037" lastFinishedPulling="2026-01-29 08:55:39.141553479 +0000 UTC m=+1019.641141441" observedRunningTime="2026-01-29 08:55:57.035219735 +0000 UTC m=+1037.534807697" watchObservedRunningTime="2026-01-29 08:56:00.366237949 +0000 UTC m=+1040.865825901" Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.390302 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" event={"ID":"5925efab-b140-47f9-9b05-309973965161","Type":"ContainerStarted","Data":"8a4bdc72fef52d29a0dd8e9346fdbdc1a40baf7f075dffc0ed05a94aa6794362"} Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.391836 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 
Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.391992 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" event={"ID":"5b5b3ff2-7c9d-412e-8eef-a203c3096694","Type":"ContainerStarted","Data":"634bc57c98c8106cf15626275b1657b2ea6097e05b50066d21472c1972c283d1"}
Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.392167 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8"
Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.447232 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" podStartSLOduration=40.584058679 podStartE2EDuration="48.447208852s" podCreationTimestamp="2026-01-29 08:55:14 +0000 UTC" firstStartedPulling="2026-01-29 08:55:53.731435656 +0000 UTC m=+1034.231023608" lastFinishedPulling="2026-01-29 08:56:01.594585829 +0000 UTC m=+1042.094173781" observedRunningTime="2026-01-29 08:56:02.442285617 +0000 UTC m=+1042.941873589" watchObservedRunningTime="2026-01-29 08:56:02.447208852 +0000 UTC m=+1042.946796814"
Jan 29 08:56:02 crc kubenswrapper[5031]: I0129 08:56:02.449538 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" podStartSLOduration=40.389888493 podStartE2EDuration="47.449522554s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:54.530050684 +0000 UTC m=+1035.029638636" lastFinishedPulling="2026-01-29 08:56:01.589684745 +0000 UTC m=+1042.089272697" observedRunningTime="2026-01-29 08:56:02.426864407 +0000 UTC m=+1042.926452389" watchObservedRunningTime="2026-01-29 08:56:02.449522554 +0000 UTC m=+1042.949110506"
Jan 29 08:56:04 crc kubenswrapper[5031]: E0129 08:56:04.285444 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podUID="c3b8b573-36e5-48c9-bfb5-adff7608c393"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.158454 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-mppwm"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.193833 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-f5hc7"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.237412 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7857f788f-x5hq5"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.468237 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-6pqwq"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.624656 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-tt4jw"
Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.657841 5031 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ftmh8" Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.712724 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-765668569f-9nxrk" Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.719728 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-r6hlv" Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.768560 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-958664b5-tpj2j" Jan 29 08:56:05 crc kubenswrapper[5031]: I0129 08:56:05.928329 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-zhkh2" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.037073 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-ltbs2" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.088113 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-hhbpv" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.133240 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-b6c99d9c5-pppjk" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.215393 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-fn2tc" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.276219 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-46js4" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.279614 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-684f4d697d-h5vhw" Jan 29 08:56:06 crc kubenswrapper[5031]: E0129 08:56:06.284431 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podUID="b8416e4f-a2ee-46c8-90ff-2ed68301825e" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.391023 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-vt2wm" Jan 29 08:56:06 crc kubenswrapper[5031]: I0129 08:56:06.398940 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-tgkd9" Jan 29 08:56:07 crc kubenswrapper[5031]: I0129 08:56:07.707981 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp" Jan 29 08:56:07 crc kubenswrapper[5031]: I0129 08:56:07.946113 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-7fd9db8655-wjbcx" Jan 29 08:56:11 crc kubenswrapper[5031]: I0129 08:56:11.347784 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8dpt8" Jan 29 08:56:16 crc kubenswrapper[5031]: I0129 08:56:16.284639 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:56:18 crc kubenswrapper[5031]: I0129 08:56:18.707904 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" event={"ID":"c3b8b573-36e5-48c9-bfb5-adff7608c393","Type":"ContainerStarted","Data":"49b18add141bf8c7f44e9c165353ecbe5b5c3238e43964f62b0d35f4cddd7a24"} Jan 29 08:56:18 crc kubenswrapper[5031]: I0129 08:56:18.710026 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" event={"ID":"b8416e4f-a2ee-46c8-90ff-2ed68301825e","Type":"ContainerStarted","Data":"1e45f57faed50390ccd788be5451dc589840837b020c1121009ca9bf3cc143ef"} Jan 29 08:56:18 crc kubenswrapper[5031]: I0129 08:56:18.710287 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:56:18 crc kubenswrapper[5031]: I0129 08:56:18.780652 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rwmm7" podStartSLOduration=2.669523355 podStartE2EDuration="1m2.780631126s" podCreationTimestamp="2026-01-29 08:55:16 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.649127461 +0000 UTC m=+998.148715413" lastFinishedPulling="2026-01-29 08:56:17.760235232 +0000 UTC m=+1058.259823184" observedRunningTime="2026-01-29 08:56:18.747232336 +0000 UTC m=+1059.246820288" watchObservedRunningTime="2026-01-29 08:56:18.780631126 +0000 UTC m=+1059.280219078" Jan 29 08:56:18 crc kubenswrapper[5031]: I0129 08:56:18.781735 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" podStartSLOduration=3.433434432 podStartE2EDuration="1m3.781723946s" podCreationTimestamp="2026-01-29 08:55:15 +0000 UTC" firstStartedPulling="2026-01-29 08:55:17.644092834 +0000 UTC m=+998.143680786" lastFinishedPulling="2026-01-29 08:56:17.992382348 +0000 UTC m=+1058.491970300" observedRunningTime="2026-01-29 08:56:18.774685704 +0000 UTC m=+1059.274273656" watchObservedRunningTime="2026-01-29 08:56:18.781723946 +0000 UTC m=+1059.281311898" Jan 29 08:56:25 crc kubenswrapper[5031]: I0129 08:56:25.975805 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6hd46" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.771130 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"] Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.773729 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.773729 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.776602 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-km5ws"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.776897 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.777006 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.777422 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.798779 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"]
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.805745 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.805828 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747hb\" (UniqueName: \"kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb"
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.858500 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"]
Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.859858 5031 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.872220 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.877550 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"] Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.907244 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.907331 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-747hb\" (UniqueName: \"kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.908799 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:56:40 crc kubenswrapper[5031]: I0129 08:56:40.929338 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-747hb\" (UniqueName: \"kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb\") pod \"dnsmasq-dns-675f4bcbfc-4ftqb\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.008524 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b49z\" (UniqueName: \"kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.008577 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.008653 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.091250 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.109553 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.109648 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b49z\" (UniqueName: \"kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.109673 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.110399 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.110460 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.137934 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b49z\" (UniqueName: \"kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z\") pod \"dnsmasq-dns-78dd6ddcc-56v2s\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.188820 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.525013 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"] Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.863326 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"] Jan 29 08:56:41 crc kubenswrapper[5031]: W0129 08:56:41.869656 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530b09b8_2d95_4af4_9643_90b880b0eb45.slice/crio-16b44f1d47a5e04cf6077caf7b4215fff820697ec752390976646d10d76faae5 WatchSource:0}: Error finding container 16b44f1d47a5e04cf6077caf7b4215fff820697ec752390976646d10d76faae5: Status 404 returned error can't find the container with id 16b44f1d47a5e04cf6077caf7b4215fff820697ec752390976646d10d76faae5 Jan 29 08:56:41 crc kubenswrapper[5031]: I0129 08:56:41.870354 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" event={"ID":"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74","Type":"ContainerStarted","Data":"7f5225a44bdde2051182215a45973b3c6074d78508babb91bcc0f5962dda1005"} Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.131201 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" event={"ID":"530b09b8-2d95-4af4-9643-90b880b0eb45","Type":"ContainerStarted","Data":"16b44f1d47a5e04cf6077caf7b4215fff820697ec752390976646d10d76faae5"} Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.770624 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"] Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.802866 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.810238 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.818700 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.818794 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjk2\" (UniqueName: \"kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.818844 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.871656 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.921413 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.923782 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.923890 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgjk2\" (UniqueName: \"kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.923946 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.924810 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:43 crc kubenswrapper[5031]: I0129 08:56:43.957442 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgjk2\" (UniqueName: 
\"kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2\") pod \"dnsmasq-dns-666b6646f7-xdc5j\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.134067 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"] Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.162884 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.163488 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.166165 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.194380 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.238342 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spl5k\" (UniqueName: \"kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.238486 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.238565 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.340172 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.340320 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spl5k\" (UniqueName: \"kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.340383 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.342088 5031 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.342328 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.365873 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spl5k\" (UniqueName: \"kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k\") pod \"dnsmasq-dns-57d769cc4f-m48cg\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.498325 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:56:44 crc kubenswrapper[5031]: I0129 08:56:44.691279 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:56:44 crc kubenswrapper[5031]: W0129 08:56:44.729692 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ebe3572_e2ba_4a69_b7be_19ad9ee6834b.slice/crio-83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad WatchSource:0}: Error finding container 83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad: Status 404 returned error can't find the container with id 83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.010074 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.011999 5031 util.go:30] "No sandbox for pod can be found. 
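The "Failed to process watch event ... 404" warning above is cAdvisor noticing the new crio-... cgroup before the runtime has registered the container; PLEG reports ContainerStarted for the same container id moments later, so the warning is transient. The cgroup name embeds the pod UID with underscores where the API object uses dashes. A minimal sketch of that mapping (the helper is mine):

    import re

    # Pod UID and container id as embedded in the kubepods cgroup paths of
    # the watch-event warnings above; cgroup names use '_' inside the UID.
    CGROUP = re.compile(r'-pod(?P<uid>[0-9a-f_]+)\.slice/crio-(?P<cid>[0-9a-f]+)')

    def pod_uid_and_container(cgroup_path: str) -> tuple[str, str]:
        """Map a kubepods cgroup path to (API pod UID, container id)."""
        m = CGROUP.search(cgroup_path)
        return m.group("uid").replace("_", "-"), m.group("cid")

    path = ("/kubepods.slice/kubepods-besteffort.slice/"
            "kubepods-besteffort-pod5ebe3572_e2ba_4a69_b7be_19ad9ee6834b.slice/"
            "crio-83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad")
    uid, cid = pod_uid_and_container(path)
    # uid == "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b", the UID the surrounding
    # records attribute to dnsmasq-dns-666b6646f7-xdc5j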
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.011999 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.016745 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.016748 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.016944 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.017110 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.017307 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.017492 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bkbtw"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.017740 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.029097 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084708 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084760 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084787 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084809 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084841 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084874 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName:
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084890 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmscd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084906 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084931 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084952 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.084977 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.128602 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:56:45 crc kubenswrapper[5031]: W0129 08:56:45.150572 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49de847e_bdf9_48ea_8d36_e08b4b696a22.slice/crio-a05cc8dcf10e863b2dcb9e65a5870a507fbf10a454ce06e362d3e0b53163e8c6 WatchSource:0}: Error finding container a05cc8dcf10e863b2dcb9e65a5870a507fbf10a454ce06e362d3e0b53163e8c6: Status 404 returned error can't find the container with id a05cc8dcf10e863b2dcb9e65a5870a507fbf10a454ce06e362d3e0b53163e8c6 Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.164875 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" event={"ID":"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b","Type":"ContainerStarted","Data":"83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad"} Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.166748 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" event={"ID":"49de847e-bdf9-48ea-8d36-e08b4b696a22","Type":"ContainerStarted","Data":"a05cc8dcf10e863b2dcb9e65a5870a507fbf10a454ce06e362d3e0b53163e8c6"} Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.185805 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.186720 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.186983 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187102 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187353 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187399 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187625 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187794 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187821 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmscd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187841 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " 
pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187873 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187901 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.187927 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.188809 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.189038 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.190292 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.192731 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.197974 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.201975 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.207692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.226956 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmscd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.230877 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.241502 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.403356 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.404567 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.409335 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.409855 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410031 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410307 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410390 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410568 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410718 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.410894 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wdwz4" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.424972 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597324 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597362 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597401 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597542 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkgd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597583 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597750 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597819 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597865 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597895 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.597956 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 
08:56:45.598009 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792046 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792139 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792183 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792215 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792239 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkgd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792306 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792332 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792359 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792395 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792449 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.792473 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.793932 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.794445 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.794530 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.795147 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.800302 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.801541 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.840021 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.978591 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.979414 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkgd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.980594 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.982049 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:45 crc kubenswrapper[5031]: I0129 08:56:45.983153 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.033035 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.462456 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.750725 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.755117 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.767489 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hztqz" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.775204 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.776038 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.813255 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.813619 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.885204 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.919332 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd5g\" (UniqueName: \"kubernetes.io/projected/33700928-aca8-42c5-83f7-a57572d399aa-kube-api-access-2cd5g\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.919665 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33700928-aca8-42c5-83f7-a57572d399aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.919703 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.919723 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.919750 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.921114 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " 
pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.921138 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:46 crc kubenswrapper[5031]: I0129 08:56:46.921159 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022781 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33700928-aca8-42c5-83f7-a57572d399aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022846 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022868 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022893 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022916 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022935 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022955 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.022981 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2cd5g\" (UniqueName: \"kubernetes.io/projected/33700928-aca8-42c5-83f7-a57572d399aa-kube-api-access-2cd5g\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.023649 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33700928-aca8-42c5-83f7-a57572d399aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.024465 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.025403 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.025803 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.026240 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33700928-aca8-42c5-83f7-a57572d399aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.035515 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.042928 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33700928-aca8-42c5-83f7-a57572d399aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.052652 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cd5g\" (UniqueName: \"kubernetes.io/projected/33700928-aca8-42c5-83f7-a57572d399aa-kube-api-access-2cd5g\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.074691 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"33700928-aca8-42c5-83f7-a57572d399aa\") " pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc 
kubenswrapper[5031]: I0129 08:56:47.211824 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.225787 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerStarted","Data":"572ee2637e3e4264d635a98edef3a7809ff321b7540668f27dbe885820462cfc"} Jan 29 08:56:47 crc kubenswrapper[5031]: I0129 08:56:47.332147 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:56:47 crc kubenswrapper[5031]: W0129 08:56:47.360687 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9e34c17_fba9_4efa_8912_ede69c516560.slice/crio-f17d93ce9752ba78dc25a03a48306c6a9300af971fdd648836fba60b20f4588b WatchSource:0}: Error finding container f17d93ce9752ba78dc25a03a48306c6a9300af971fdd648836fba60b20f4588b: Status 404 returned error can't find the container with id f17d93ce9752ba78dc25a03a48306c6a9300af971fdd648836fba60b20f4588b Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.095833 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.099160 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.103692 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6j2hs" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.108463 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.109931 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.108869 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.109102 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203169 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203232 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203284 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " 
pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203318 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203340 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203426 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203449 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.203476 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kzl\" (UniqueName: \"kubernetes.io/projected/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kube-api-access-56kzl\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.249206 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.250358 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.253802 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qqw89" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.261666 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.261717 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.266380 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.273889 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerStarted","Data":"f17d93ce9752ba78dc25a03a48306c6a9300af971fdd648836fba60b20f4588b"} Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304640 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304694 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304735 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kzl\" (UniqueName: \"kubernetes.io/projected/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kube-api-access-56kzl\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304850 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304890 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.304923 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.305145 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.305169 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.305471 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.306342 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.307902 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.309097 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.309175 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.313823 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.313855 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7149ef7-171a-48eb-a13a-af1982b4fbb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.320337 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.337791 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56kzl\" (UniqueName: 
\"kubernetes.io/projected/a7149ef7-171a-48eb-a13a-af1982b4fbb1-kube-api-access-56kzl\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.346680 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7149ef7-171a-48eb-a13a-af1982b4fbb1\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.407116 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kolla-config\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.407301 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.407339 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.407421 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg4g\" (UniqueName: \"kubernetes.io/projected/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kube-api-access-rcg4g\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.407533 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-config-data\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.430234 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.510348 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.510413 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.510446 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcg4g\" (UniqueName: \"kubernetes.io/projected/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kube-api-access-rcg4g\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.510492 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-config-data\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.510541 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kolla-config\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.511493 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kolla-config\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.512763 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7411c3e7-5370-4bc2-85b8-aa1a137d948b-config-data\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.527421 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.534615 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcg4g\" (UniqueName: \"kubernetes.io/projected/7411c3e7-5370-4bc2-85b8-aa1a137d948b-kube-api-access-rcg4g\") pod \"memcached-0\" (UID: \"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.543028 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7411c3e7-5370-4bc2-85b8-aa1a137d948b-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"7411c3e7-5370-4bc2-85b8-aa1a137d948b\") " pod="openstack/memcached-0" Jan 29 08:56:48 crc kubenswrapper[5031]: I0129 08:56:48.588477 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 08:56:49 crc kubenswrapper[5031]: I0129 08:56:49.239581 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:56:49 crc kubenswrapper[5031]: I0129 08:56:49.304095 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33700928-aca8-42c5-83f7-a57572d399aa","Type":"ContainerStarted","Data":"d1b9375378b95af6c1c2bba527e609264c5d01cfe629fdfb8e635b3b937c377c"} Jan 29 08:56:49 crc kubenswrapper[5031]: I0129 08:56:49.328495 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.973502 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7149ef7-171a-48eb-a13a-af1982b4fbb1","Type":"ContainerStarted","Data":"ec8ab2d2143e67586ca58f5c2d7664437b1b82da784d71a0a6be65cf5f69519e"} Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.974866 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7411c3e7-5370-4bc2-85b8-aa1a137d948b","Type":"ContainerStarted","Data":"f14452a6105b396b4fa5b45dd3cce52459a1a17a377d1650453cfc47ac1e235c"} Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.975024 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.976793 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.977567 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:56:50 crc kubenswrapper[5031]: I0129 08:56:50.978700 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vbmrc" Jan 29 08:56:51 crc kubenswrapper[5031]: I0129 08:56:51.085784 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5r6v\" (UniqueName: \"kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v\") pod \"kube-state-metrics-0\" (UID: \"6c528f35-8b42-42a9-9e47-9aee6ba624f5\") " pod="openstack/kube-state-metrics-0" Jan 29 08:56:51 crc kubenswrapper[5031]: I0129 08:56:51.189145 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5r6v\" (UniqueName: \"kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v\") pod \"kube-state-metrics-0\" (UID: \"6c528f35-8b42-42a9-9e47-9aee6ba624f5\") " pod="openstack/kube-state-metrics-0" Jan 29 08:56:51 crc kubenswrapper[5031]: I0129 08:56:51.240510 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5r6v\" (UniqueName: \"kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v\") pod \"kube-state-metrics-0\" (UID: \"6c528f35-8b42-42a9-9e47-9aee6ba624f5\") " pod="openstack/kube-state-metrics-0" Jan 29 08:56:51 crc kubenswrapper[5031]: I0129 08:56:51.326749 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:56:52 crc kubenswrapper[5031]: I0129 08:56:52.405213 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.847112 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-z6mp7"] Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.862304 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.864657 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lmq4s"] Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.864857 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5vgdg" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.866804 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.866855 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.868381 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.876684 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7"] Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.894613 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lmq4s"] Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954063 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-combined-ca-bundle\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954106 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-ovn-controller-tls-certs\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954126 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d10ff314-d9a8-43bc-a0ad-c821e181b328-scripts\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954147 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-run\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954166 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4xw\" (UniqueName: 
\"kubernetes.io/projected/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-kube-api-access-4b4xw\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954182 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-etc-ovs\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954210 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-lib\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954225 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954260 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run-ovn\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954278 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-scripts\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954299 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-log-ovn\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954347 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6ff\" (UniqueName: \"kubernetes.io/projected/d10ff314-d9a8-43bc-a0ad-c821e181b328-kube-api-access-hr6ff\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:54 crc kubenswrapper[5031]: I0129 08:56:54.954444 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-log\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056592 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr6ff\" (UniqueName: 
\"kubernetes.io/projected/d10ff314-d9a8-43bc-a0ad-c821e181b328-kube-api-access-hr6ff\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056670 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-log\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056706 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-combined-ca-bundle\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056746 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-ovn-controller-tls-certs\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056768 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d10ff314-d9a8-43bc-a0ad-c821e181b328-scripts\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056793 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-run\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056829 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4xw\" (UniqueName: \"kubernetes.io/projected/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-kube-api-access-4b4xw\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056847 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-etc-ovs\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056889 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-lib\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056919 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " 
pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.056996 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run-ovn\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057035 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-scripts\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057067 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-log-ovn\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057357 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-etc-ovs\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057460 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-run\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057461 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057509 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-log\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057629 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d10ff314-d9a8-43bc-a0ad-c821e181b328-var-lib\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.057793 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-log-ovn\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.058285 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-var-run-ovn\") pod \"ovn-controller-z6mp7\" (UID: 
\"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.059460 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-scripts\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.074165 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d10ff314-d9a8-43bc-a0ad-c821e181b328-scripts\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.076570 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr6ff\" (UniqueName: \"kubernetes.io/projected/d10ff314-d9a8-43bc-a0ad-c821e181b328-kube-api-access-hr6ff\") pod \"ovn-controller-ovs-lmq4s\" (UID: \"d10ff314-d9a8-43bc-a0ad-c821e181b328\") " pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.078003 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-combined-ca-bundle\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.093618 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-ovn-controller-tls-certs\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.097998 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4xw\" (UniqueName: \"kubernetes.io/projected/b34fd049-3d7e-4d5d-acfc-8e4c450bf857-kube-api-access-4b4xw\") pod \"ovn-controller-z6mp7\" (UID: \"b34fd049-3d7e-4d5d-acfc-8e4c450bf857\") " pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.198827 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-z6mp7" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.288254 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.397940 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.402887 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.406638 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.412388 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.412486 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.412681 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.412895 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.412919 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wpq7w" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493237 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493527 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493568 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493613 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-config\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493637 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gwh\" (UniqueName: \"kubernetes.io/projected/11c52100-0b09-4377-b50e-84c78d3ddf74-kube-api-access-w6gwh\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493754 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493840 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.493881 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605088 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605160 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605183 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605209 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605247 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-config\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605268 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gwh\" (UniqueName: \"kubernetes.io/projected/11c52100-0b09-4377-b50e-84c78d3ddf74-kube-api-access-w6gwh\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605349 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.605393 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 
08:56:55.605781 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.607147 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.608673 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-config\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.610942 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.612803 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11c52100-0b09-4377-b50e-84c78d3ddf74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.617320 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.624967 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c52100-0b09-4377-b50e-84c78d3ddf74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.638515 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:55 crc kubenswrapper[5031]: I0129 08:56:55.839408 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gwh\" (UniqueName: \"kubernetes.io/projected/11c52100-0b09-4377-b50e-84c78d3ddf74-kube-api-access-w6gwh\") pod \"ovsdbserver-nb-0\" (UID: \"11c52100-0b09-4377-b50e-84c78d3ddf74\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:56 crc kubenswrapper[5031]: I0129 08:56:56.027923 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.607266 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.609330 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.612267 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gdtm5" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.613119 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.613309 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.615277 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.632155 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788712 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788776 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788820 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788844 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmg7z\" (UniqueName: \"kubernetes.io/projected/0ad1ce96-1373-407b-b4ec-700934ef6ac4-kube-api-access-rmg7z\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788866 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788884 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 
08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.788977 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.789004 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893443 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893508 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893567 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893597 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmg7z\" (UniqueName: \"kubernetes.io/projected/0ad1ce96-1373-407b-b4ec-700934ef6ac4-kube-api-access-rmg7z\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893620 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893637 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893710 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.893746 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.895040 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.895815 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.896077 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.896529 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad1ce96-1373-407b-b4ec-700934ef6ac4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.901787 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.913274 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmg7z\" (UniqueName: \"kubernetes.io/projected/0ad1ce96-1373-407b-b4ec-700934ef6ac4-kube-api-access-rmg7z\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.914277 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.916073 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad1ce96-1373-407b-b4ec-700934ef6ac4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:57 crc kubenswrapper[5031]: I0129 08:56:57.958217 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0ad1ce96-1373-407b-b4ec-700934ef6ac4\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:56:58 crc kubenswrapper[5031]: I0129 08:56:58.463298 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 08:57:08 crc kubenswrapper[5031]: I0129 08:57:08.493935 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:57:08 crc kubenswrapper[5031]: I0129 08:57:08.494492 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:57:10 crc kubenswrapper[5031]: W0129 08:57:10.489761 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c528f35_8b42_42a9_9e47_9aee6ba624f5.slice/crio-76a43af5c6e673f052f461f5a584d5bbb4a31b233f9df5e2ac549dc8755c6f3f WatchSource:0}: Error finding container 76a43af5c6e673f052f461f5a584d5bbb4a31b233f9df5e2ac549dc8755c6f3f: Status 404 returned error can't find the container with id 76a43af5c6e673f052f461f5a584d5bbb4a31b233f9df5e2ac549dc8755c6f3f Jan 29 08:57:10 crc kubenswrapper[5031]: I0129 08:57:10.722750 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c528f35-8b42-42a9-9e47-9aee6ba624f5","Type":"ContainerStarted","Data":"76a43af5c6e673f052f461f5a584d5bbb4a31b233f9df5e2ac549dc8755c6f3f"} Jan 29 08:57:15 crc kubenswrapper[5031]: E0129 08:57:15.207795 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 08:57:15 crc kubenswrapper[5031]: E0129 08:57:15.208324 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56kzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(a7149ef7-171a-48eb-a13a-af1982b4fbb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:15 crc kubenswrapper[5031]: E0129 08:57:15.210224 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="a7149ef7-171a-48eb-a13a-af1982b4fbb1" Jan 29 08:57:15 crc kubenswrapper[5031]: E0129 08:57:15.846967 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="a7149ef7-171a-48eb-a13a-af1982b4fbb1" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.354505 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.354951 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5f5h688h5f6h579h7fh5bbh98hb4hd6h566h68chfdhb4h67dh548h98h85hd6h59fh5b9h66ch5f9h596h8bh656h5c8hf7hbfh94hd9h65bhbcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7411c3e7-5370-4bc2-85b8-aa1a137d948b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.358212 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7411c3e7-5370-4bc2-85b8-aa1a137d948b" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.371262 5031 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.371499 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cd5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(33700928-aca8-42c5-83f7-a57572d399aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.373516 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="33700928-aca8-42c5-83f7-a57572d399aa" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.874566 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="33700928-aca8-42c5-83f7-a57572d399aa" Jan 29 08:57:19 crc kubenswrapper[5031]: E0129 08:57:19.875217 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="7411c3e7-5370-4bc2-85b8-aa1a137d948b" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.603502 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.603871 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmscd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(64621a94-8b58-4593-a9d0-58f0dd3c5e0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.605212 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.618683 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.618927 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqkgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a9e34c17-fba9-4efa-8912-ede69c516560): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.620171 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.880382 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" Jan 29 08:57:20 crc kubenswrapper[5031]: E0129 08:57:20.880427 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" Jan 29 08:57:21 crc kubenswrapper[5031]: I0129 08:57:21.246434 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:57:25 crc kubenswrapper[5031]: I0129 08:57:25.599764 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lmq4s"] Jan 29 08:57:25 crc kubenswrapper[5031]: I0129 08:57:25.922169 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ad1ce96-1373-407b-b4ec-700934ef6ac4","Type":"ContainerStarted","Data":"23da9ee4dd5daff91b7b22cbb85c768c0d4f72a17175fc52c3f1107090b60f10"} Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.076381 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.076938 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spl5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-m48cg_openstack(49de847e-bdf9-48ea-8d36-e08b4b696a22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.078117 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" podUID="49de847e-bdf9-48ea-8d36-e08b4b696a22" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.094174 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.094340 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-4ftqb_openstack(0a7aa386-6c19-4dfa-aed5-e521d4ac6d74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.095289 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.095420 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" podUID="0a7aa386-6c19-4dfa-aed5-e521d4ac6d74" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.095451 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b49z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-56v2s_openstack(530b09b8-2d95-4af4-9643-90b880b0eb45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.097556 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" podUID="530b09b8-2d95-4af4-9643-90b880b0eb45" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.119978 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.120143 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgjk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xdc5j_openstack(5ebe3572-e2ba-4a69-b7be-19ad9ee6834b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.121492 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" podUID="5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" Jan 29 08:57:26 crc kubenswrapper[5031]: I0129 08:57:26.499822 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7"] Jan 29 08:57:26 crc kubenswrapper[5031]: I0129 08:57:26.653057 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:57:26 crc kubenswrapper[5031]: I0129 08:57:26.937169 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7" event={"ID":"b34fd049-3d7e-4d5d-acfc-8e4c450bf857","Type":"ContainerStarted","Data":"1647d4052ad4221a3c5e09857e7b87c9507654f59f957ce4c497335271a0ac1c"} Jan 29 08:57:26 crc kubenswrapper[5031]: I0129 08:57:26.938396 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"11c52100-0b09-4377-b50e-84c78d3ddf74","Type":"ContainerStarted","Data":"c907a817d82644864c7b61cd1a09a2b5f6f757b18adfa8c90688f433bf62c564"} Jan 29 08:57:26 crc kubenswrapper[5031]: I0129 08:57:26.940120 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lmq4s" event={"ID":"d10ff314-d9a8-43bc-a0ad-c821e181b328","Type":"ContainerStarted","Data":"a8b4c5936e90d5e3ddc431fc212939e11ed8ba0682efc62665e9a94d4bca3286"} Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.942258 5031 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" podUID="5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" Jan 29 08:57:26 crc kubenswrapper[5031]: E0129 08:57:26.942458 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" podUID="49de847e-bdf9-48ea-8d36-e08b4b696a22" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.388720 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.396325 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.449869 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc\") pod \"530b09b8-2d95-4af4-9643-90b880b0eb45\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.450461 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b49z\" (UniqueName: \"kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z\") pod \"530b09b8-2d95-4af4-9643-90b880b0eb45\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.450527 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-747hb\" (UniqueName: \"kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb\") pod \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.450559 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config\") pod \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\" (UID: \"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74\") " Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.450559 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "530b09b8-2d95-4af4-9643-90b880b0eb45" (UID: "530b09b8-2d95-4af4-9643-90b880b0eb45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.451582 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config" (OuterVolumeSpecName: "config") pod "0a7aa386-6c19-4dfa-aed5-e521d4ac6d74" (UID: "0a7aa386-6c19-4dfa-aed5-e521d4ac6d74"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.452249 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.452274 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.456391 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z" (OuterVolumeSpecName: "kube-api-access-5b49z") pod "530b09b8-2d95-4af4-9643-90b880b0eb45" (UID: "530b09b8-2d95-4af4-9643-90b880b0eb45"). InnerVolumeSpecName "kube-api-access-5b49z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.472267 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb" (OuterVolumeSpecName: "kube-api-access-747hb") pod "0a7aa386-6c19-4dfa-aed5-e521d4ac6d74" (UID: "0a7aa386-6c19-4dfa-aed5-e521d4ac6d74"). InnerVolumeSpecName "kube-api-access-747hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.553407 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config\") pod \"530b09b8-2d95-4af4-9643-90b880b0eb45\" (UID: \"530b09b8-2d95-4af4-9643-90b880b0eb45\") " Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.553905 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config" (OuterVolumeSpecName: "config") pod "530b09b8-2d95-4af4-9643-90b880b0eb45" (UID: "530b09b8-2d95-4af4-9643-90b880b0eb45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.554446 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/530b09b8-2d95-4af4-9643-90b880b0eb45-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.554472 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b49z\" (UniqueName: \"kubernetes.io/projected/530b09b8-2d95-4af4-9643-90b880b0eb45-kube-api-access-5b49z\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.554489 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-747hb\" (UniqueName: \"kubernetes.io/projected/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74-kube-api-access-747hb\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.949321 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.949307 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-56v2s" event={"ID":"530b09b8-2d95-4af4-9643-90b880b0eb45","Type":"ContainerDied","Data":"16b44f1d47a5e04cf6077caf7b4215fff820697ec752390976646d10d76faae5"} Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.950622 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" event={"ID":"0a7aa386-6c19-4dfa-aed5-e521d4ac6d74","Type":"ContainerDied","Data":"7f5225a44bdde2051182215a45973b3c6074d78508babb91bcc0f5962dda1005"} Jan 29 08:57:27 crc kubenswrapper[5031]: I0129 08:57:27.950706 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4ftqb" Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.013311 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"] Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.022053 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4ftqb"] Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.037327 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"] Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.043509 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56v2s"] Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.293885 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7aa386-6c19-4dfa-aed5-e521d4ac6d74" path="/var/lib/kubelet/pods/0a7aa386-6c19-4dfa-aed5-e521d4ac6d74/volumes" Jan 29 08:57:28 crc kubenswrapper[5031]: I0129 08:57:28.294514 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530b09b8-2d95-4af4-9643-90b880b0eb45" path="/var/lib/kubelet/pods/530b09b8-2d95-4af4-9643-90b880b0eb45/volumes" Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.986078 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c528f35-8b42-42a9-9e47-9aee6ba624f5","Type":"ContainerStarted","Data":"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03"} Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.987006 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.988960 5031 generic.go:334] "Generic (PLEG): container finished" podID="d10ff314-d9a8-43bc-a0ad-c821e181b328" containerID="10ad5980f3817b24e3257e36f7d5a84de6e0a8af1ff3e033572e1d42acd22fcb" exitCode=0 Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.989676 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lmq4s" event={"ID":"d10ff314-d9a8-43bc-a0ad-c821e181b328","Type":"ContainerDied","Data":"10ad5980f3817b24e3257e36f7d5a84de6e0a8af1ff3e033572e1d42acd22fcb"} Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.994685 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ad1ce96-1373-407b-b4ec-700934ef6ac4","Type":"ContainerStarted","Data":"b624e27b4f9d8d4c34744bd7d97875abdac8d4d4bdd0052f1d6932a4abe007bb"} Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.998008 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7" 
event={"ID":"b34fd049-3d7e-4d5d-acfc-8e4c450bf857","Type":"ContainerStarted","Data":"43acccc78255f31adae65ce3b4e2e1c22806de56a9a7f78b42acbe1a7d54924e"} Jan 29 08:57:30 crc kubenswrapper[5031]: I0129 08:57:30.998092 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-z6mp7" Jan 29 08:57:31 crc kubenswrapper[5031]: I0129 08:57:31.002298 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"11c52100-0b09-4377-b50e-84c78d3ddf74","Type":"ContainerStarted","Data":"7dfbcbf3d482a2b83a536b9c9d3ca3197e82e19c6309d0f218d4bf612ce6200f"} Jan 29 08:57:31 crc kubenswrapper[5031]: I0129 08:57:31.007853 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.629271075 podStartE2EDuration="41.007807949s" podCreationTimestamp="2026-01-29 08:56:50 +0000 UTC" firstStartedPulling="2026-01-29 08:57:10.498645002 +0000 UTC m=+1110.998232954" lastFinishedPulling="2026-01-29 08:57:29.877181876 +0000 UTC m=+1130.376769828" observedRunningTime="2026-01-29 08:57:31.001578981 +0000 UTC m=+1131.501166933" watchObservedRunningTime="2026-01-29 08:57:31.007807949 +0000 UTC m=+1131.507395901" Jan 29 08:57:31 crc kubenswrapper[5031]: I0129 08:57:31.057652 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-z6mp7" podStartSLOduration=33.694153053 podStartE2EDuration="37.057628117s" podCreationTimestamp="2026-01-29 08:56:54 +0000 UTC" firstStartedPulling="2026-01-29 08:57:26.550943033 +0000 UTC m=+1127.050530985" lastFinishedPulling="2026-01-29 08:57:29.914418097 +0000 UTC m=+1130.414006049" observedRunningTime="2026-01-29 08:57:31.056075815 +0000 UTC m=+1131.555663777" watchObservedRunningTime="2026-01-29 08:57:31.057628117 +0000 UTC m=+1131.557216069" Jan 29 08:57:32 crc kubenswrapper[5031]: I0129 08:57:32.014936 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ad1ce96-1373-407b-b4ec-700934ef6ac4","Type":"ContainerStarted","Data":"8f25f3cac6d9bcb56f3dd635fe9df2d4f6905496346ab492c197624b4a15d302"} Jan 29 08:57:32 crc kubenswrapper[5031]: I0129 08:57:32.017733 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"11c52100-0b09-4377-b50e-84c78d3ddf74","Type":"ContainerStarted","Data":"37e57e15600942ffb09b29ea781fdd0ddb9be07b2da7deb876dc617f22d6227e"} Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.025464 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33700928-aca8-42c5-83f7-a57572d399aa","Type":"ContainerStarted","Data":"ccfd4027ea5b0d27f56953deea7e514b872855290b5537b260ebe1f1a40ac67a"} Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.030247 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7411c3e7-5370-4bc2-85b8-aa1a137d948b","Type":"ContainerStarted","Data":"d1fb82dcf85908a2d91b8bc51df6d4ff308a63d370fcaa2891444a08f679eb32"} Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.030647 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.032622 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lmq4s" event={"ID":"d10ff314-d9a8-43bc-a0ad-c821e181b328","Type":"ContainerStarted","Data":"6273339c862bc1d549efa84bc3653575dea1f5b8109d5a1d241ffeb54c803ef3"} Jan 29 
08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.035330 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7149ef7-171a-48eb-a13a-af1982b4fbb1","Type":"ContainerStarted","Data":"d356011767f6cd425016004b8cc10e41ede3ea3cd1dc95d4889092daab5bb213"} Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.108725 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.399679909 podStartE2EDuration="37.108705266s" podCreationTimestamp="2026-01-29 08:56:56 +0000 UTC" firstStartedPulling="2026-01-29 08:57:24.948713021 +0000 UTC m=+1125.448300973" lastFinishedPulling="2026-01-29 08:57:31.657738388 +0000 UTC m=+1132.157326330" observedRunningTime="2026-01-29 08:57:33.102081448 +0000 UTC m=+1133.601669410" watchObservedRunningTime="2026-01-29 08:57:33.108705266 +0000 UTC m=+1133.608293218" Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.156450 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=34.237694934 podStartE2EDuration="39.156430578s" podCreationTimestamp="2026-01-29 08:56:54 +0000 UTC" firstStartedPulling="2026-01-29 08:57:26.733875536 +0000 UTC m=+1127.233463488" lastFinishedPulling="2026-01-29 08:57:31.65261118 +0000 UTC m=+1132.152199132" observedRunningTime="2026-01-29 08:57:33.140184062 +0000 UTC m=+1133.639772014" watchObservedRunningTime="2026-01-29 08:57:33.156430578 +0000 UTC m=+1133.656018530" Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.158011 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.724712468 podStartE2EDuration="45.157985599s" podCreationTimestamp="2026-01-29 08:56:48 +0000 UTC" firstStartedPulling="2026-01-29 08:56:49.359408002 +0000 UTC m=+1089.858995954" lastFinishedPulling="2026-01-29 08:57:31.792681133 +0000 UTC m=+1132.292269085" observedRunningTime="2026-01-29 08:57:33.1241323 +0000 UTC m=+1133.623720252" watchObservedRunningTime="2026-01-29 08:57:33.157985599 +0000 UTC m=+1133.657573561" Jan 29 08:57:33 crc kubenswrapper[5031]: I0129 08:57:33.534498 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.043512 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerStarted","Data":"d308cbaf1d8f06db09add169a2872364927af335501f931edf11fcafcddf42c0"} Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.046182 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lmq4s" event={"ID":"d10ff314-d9a8-43bc-a0ad-c821e181b328","Type":"ContainerStarted","Data":"d675c50a75aad595274e26ed8a32ec7fac8236fc23c891a29cd06aa90a4873d7"} Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.046497 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.047233 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.111003 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lmq4s" podStartSLOduration=36.29127889 podStartE2EDuration="40.11098s" podCreationTimestamp="2026-01-29 08:56:54 
+0000 UTC" firstStartedPulling="2026-01-29 08:57:26.059273714 +0000 UTC m=+1126.558861676" lastFinishedPulling="2026-01-29 08:57:29.878974834 +0000 UTC m=+1130.378562786" observedRunningTime="2026-01-29 08:57:34.104356592 +0000 UTC m=+1134.603944544" watchObservedRunningTime="2026-01-29 08:57:34.11098 +0000 UTC m=+1134.610567952" Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.541942 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 08:57:34 crc kubenswrapper[5031]: I0129 08:57:34.581058 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.028741 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.057082 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerStarted","Data":"248333fd4f79e20db6d18e37d447343ffb055ab9198e066636271c6a0039cfcd"} Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.072980 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.073333 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.114871 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.121063 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.498160 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.550511 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.562754 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.573602 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.605516 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.647761 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-khdxz"] Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.649139 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.651812 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.657636 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-khdxz"] Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.674252 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.674294 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.674351 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.674401 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rf8\" (UniqueName: \"kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775576 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775650 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775705 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775750 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovn-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: 
\"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775796 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-config\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775831 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775870 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rf8\" (UniqueName: \"kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775909 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovs-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775937 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mzw\" (UniqueName: \"kubernetes.io/projected/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-kube-api-access-k8mzw\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.775961 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-combined-ca-bundle\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.777052 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.777724 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.778392 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: 
\"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.815252 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rf8\" (UniqueName: \"kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8\") pod \"dnsmasq-dns-6bc7876d45-6rwh7\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.877761 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-config\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.877882 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovs-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.877913 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mzw\" (UniqueName: \"kubernetes.io/projected/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-kube-api-access-k8mzw\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.877937 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-combined-ca-bundle\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.877992 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.878043 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovn-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.878396 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovn-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.878883 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-ovs-rundir\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " 
pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.879035 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-config\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.882618 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.882826 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-combined-ca-bundle\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.897532 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mzw\" (UniqueName: \"kubernetes.io/projected/8e57b4c5-5c87-4720-9586-c4e7a8cf763f-kube-api-access-k8mzw\") pod \"ovn-controller-metrics-khdxz\" (UID: \"8e57b4c5-5c87-4720-9586-c4e7a8cf763f\") " pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.908694 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:35 crc kubenswrapper[5031]: I0129 08:57:35.986303 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-khdxz" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.015538 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.016901 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.016998 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.021510 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.021720 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.021841 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.022034 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nw4z4" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.042290 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.088159 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091758 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrt9\" (UniqueName: \"kubernetes.io/projected/2f3941fd-64d1-4652-83b1-e89d547e4df5-kube-api-access-mnrt9\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091790 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091811 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-scripts\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091829 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091847 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091872 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.091912 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-config\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.092027 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.095641 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.130286 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252415 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252457 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252539 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-config\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252563 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9bj\" (UniqueName: \"kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252627 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252680 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252702 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrt9\" (UniqueName: \"kubernetes.io/projected/2f3941fd-64d1-4652-83b1-e89d547e4df5-kube-api-access-mnrt9\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252721 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252738 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-scripts\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252757 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252774 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.252801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.254243 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-config\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.255312 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.256344 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3941fd-64d1-4652-83b1-e89d547e4df5-scripts\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.259455 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.260332 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.267178 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3941fd-64d1-4652-83b1-e89d547e4df5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.276056 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrt9\" (UniqueName: \"kubernetes.io/projected/2f3941fd-64d1-4652-83b1-e89d547e4df5-kube-api-access-mnrt9\") pod \"ovn-northd-0\" (UID: \"2f3941fd-64d1-4652-83b1-e89d547e4df5\") " pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.335569 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.357291 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9bj\" (UniqueName: \"kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.357414 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.357527 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.357669 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.357696 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.359227 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.359493 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.359596 5031 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.363924 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.384687 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9bj\" (UniqueName: \"kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj\") pod \"dnsmasq-dns-8554648995-hhbcg\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.409457 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.430119 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.458639 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config\") pod \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.458782 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc\") pod \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.458838 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgjk2\" (UniqueName: \"kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2\") pod \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\" (UID: \"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.460704 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config" (OuterVolumeSpecName: "config") pod "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" (UID: "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.460760 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" (UID: "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.481650 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2" (OuterVolumeSpecName: "kube-api-access-tgjk2") pod "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" (UID: "5ebe3572-e2ba-4a69-b7be-19ad9ee6834b"). InnerVolumeSpecName "kube-api-access-tgjk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.589581 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.589628 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgjk2\" (UniqueName: \"kubernetes.io/projected/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-kube-api-access-tgjk2\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.589643 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.637617 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.692318 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc\") pod \"49de847e-bdf9-48ea-8d36-e08b4b696a22\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.692431 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spl5k\" (UniqueName: \"kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k\") pod \"49de847e-bdf9-48ea-8d36-e08b4b696a22\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.692485 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config\") pod \"49de847e-bdf9-48ea-8d36-e08b4b696a22\" (UID: \"49de847e-bdf9-48ea-8d36-e08b4b696a22\") " Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.692850 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49de847e-bdf9-48ea-8d36-e08b4b696a22" (UID: "49de847e-bdf9-48ea-8d36-e08b4b696a22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.693001 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.695755 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config" (OuterVolumeSpecName: "config") pod "49de847e-bdf9-48ea-8d36-e08b4b696a22" (UID: "49de847e-bdf9-48ea-8d36-e08b4b696a22"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.695973 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k" (OuterVolumeSpecName: "kube-api-access-spl5k") pod "49de847e-bdf9-48ea-8d36-e08b4b696a22" (UID: "49de847e-bdf9-48ea-8d36-e08b4b696a22"). InnerVolumeSpecName "kube-api-access-spl5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.794502 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spl5k\" (UniqueName: \"kubernetes.io/projected/49de847e-bdf9-48ea-8d36-e08b4b696a22-kube-api-access-spl5k\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.794549 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49de847e-bdf9-48ea-8d36-e08b4b696a22-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:36 crc kubenswrapper[5031]: W0129 08:57:36.826724 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e57b4c5_5c87_4720_9586_c4e7a8cf763f.slice/crio-b4fcaec1527f47632feed6d5a8a9c0f847f193e1f48c7f0f15ec948d39f4577b WatchSource:0}: Error finding container b4fcaec1527f47632feed6d5a8a9c0f847f193e1f48c7f0f15ec948d39f4577b: Status 404 returned error can't find the container with id b4fcaec1527f47632feed6d5a8a9c0f847f193e1f48c7f0f15ec948d39f4577b Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.834697 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-khdxz"] Jan 29 08:57:36 crc kubenswrapper[5031]: I0129 08:57:36.847442 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.002765 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.073662 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:57:37 crc kubenswrapper[5031]: W0129 08:57:37.078730 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2cea483_4915_4fd9_8b38_e257ec143e34.slice/crio-2eb87de28370612361ff37b3e2d1375639e3b3d29be6258909e5b111e57ad558 WatchSource:0}: Error finding container 2eb87de28370612361ff37b3e2d1375639e3b3d29be6258909e5b111e57ad558: Status 404 returned error can't find the container with id 2eb87de28370612361ff37b3e2d1375639e3b3d29be6258909e5b111e57ad558 Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.129159 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" event={"ID":"5ebe3572-e2ba-4a69-b7be-19ad9ee6834b","Type":"ContainerDied","Data":"83321b7015d98e0dee3b1be2bf5bce4d68d18e102011c08e3361cd0b5a8086ad"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.129292 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdc5j" Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.134198 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hhbcg" event={"ID":"f2cea483-4915-4fd9-8b38-e257ec143e34","Type":"ContainerStarted","Data":"2eb87de28370612361ff37b3e2d1375639e3b3d29be6258909e5b111e57ad558"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.137893 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2f3941fd-64d1-4652-83b1-e89d547e4df5","Type":"ContainerStarted","Data":"120fc0dff71b03368b2295fa15c294623347fa3e67bf0bd23b491ae275a37fba"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.141484 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" event={"ID":"c2e2664b-821c-495b-900a-35362def28d2","Type":"ContainerStarted","Data":"3eb62763ca22cdd747f9f7187b93262cc0844dd7f71b1f586eafa412f5486c5a"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.145006 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-khdxz" event={"ID":"8e57b4c5-5c87-4720-9586-c4e7a8cf763f","Type":"ContainerStarted","Data":"b4fcaec1527f47632feed6d5a8a9c0f847f193e1f48c7f0f15ec948d39f4577b"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.146695 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" event={"ID":"49de847e-bdf9-48ea-8d36-e08b4b696a22","Type":"ContainerDied","Data":"a05cc8dcf10e863b2dcb9e65a5870a507fbf10a454ce06e362d3e0b53163e8c6"} Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.146715 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m48cg" Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.211152 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.247401 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdc5j"] Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.260335 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:57:37 crc kubenswrapper[5031]: I0129 08:57:37.266440 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m48cg"] Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.164089 5031 generic.go:334] "Generic (PLEG): container finished" podID="33700928-aca8-42c5-83f7-a57572d399aa" containerID="ccfd4027ea5b0d27f56953deea7e514b872855290b5537b260ebe1f1a40ac67a" exitCode=0 Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.164268 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33700928-aca8-42c5-83f7-a57572d399aa","Type":"ContainerDied","Data":"ccfd4027ea5b0d27f56953deea7e514b872855290b5537b260ebe1f1a40ac67a"} Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.171599 5031 generic.go:334] "Generic (PLEG): container finished" podID="c2e2664b-821c-495b-900a-35362def28d2" containerID="7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4" exitCode=0 Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.171704 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" 
event={"ID":"c2e2664b-821c-495b-900a-35362def28d2","Type":"ContainerDied","Data":"7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4"} Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.192015 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-khdxz" event={"ID":"8e57b4c5-5c87-4720-9586-c4e7a8cf763f","Type":"ContainerStarted","Data":"a71abed332fc5e81777eb13823b4c7b949685822f720b2dcf15f9f2d4cc86125"} Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.229074 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-khdxz" podStartSLOduration=3.229051616 podStartE2EDuration="3.229051616s" podCreationTimestamp="2026-01-29 08:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:57:38.220037764 +0000 UTC m=+1138.719625716" watchObservedRunningTime="2026-01-29 08:57:38.229051616 +0000 UTC m=+1138.728639568" Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.301704 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49de847e-bdf9-48ea-8d36-e08b4b696a22" path="/var/lib/kubelet/pods/49de847e-bdf9-48ea-8d36-e08b4b696a22/volumes" Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.302147 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebe3572-e2ba-4a69-b7be-19ad9ee6834b" path="/var/lib/kubelet/pods/5ebe3572-e2ba-4a69-b7be-19ad9ee6834b/volumes" Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.493921 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.494223 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:57:38 crc kubenswrapper[5031]: I0129 08:57:38.590231 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.202131 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33700928-aca8-42c5-83f7-a57572d399aa","Type":"ContainerStarted","Data":"7d53cf09af83a85f244f28652fe619f2e5333ebef4b8424119fb42867661386d"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.204469 5031 generic.go:334] "Generic (PLEG): container finished" podID="a7149ef7-171a-48eb-a13a-af1982b4fbb1" containerID="d356011767f6cd425016004b8cc10e41ede3ea3cd1dc95d4889092daab5bb213" exitCode=0 Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.204552 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7149ef7-171a-48eb-a13a-af1982b4fbb1","Type":"ContainerDied","Data":"d356011767f6cd425016004b8cc10e41ede3ea3cd1dc95d4889092daab5bb213"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.206455 5031 generic.go:334] "Generic (PLEG): container finished" podID="f2cea483-4915-4fd9-8b38-e257ec143e34" 
containerID="8fe0f7777770b4c1c59f187104be805eb404c082aff018c3f5d840910cdb4e2c" exitCode=0 Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.206541 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hhbcg" event={"ID":"f2cea483-4915-4fd9-8b38-e257ec143e34","Type":"ContainerDied","Data":"8fe0f7777770b4c1c59f187104be805eb404c082aff018c3f5d840910cdb4e2c"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.208705 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2f3941fd-64d1-4652-83b1-e89d547e4df5","Type":"ContainerStarted","Data":"6d78942fd267222c813d00703ff05e44a5029a1fde7fa17953a7d8cd0cfeeee8"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.208745 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2f3941fd-64d1-4652-83b1-e89d547e4df5","Type":"ContainerStarted","Data":"2925859888dda15287c5be4ce2d0bce2a6ae75a04241ee19246482cf22b46650"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.209312 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.213287 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" event={"ID":"c2e2664b-821c-495b-900a-35362def28d2","Type":"ContainerStarted","Data":"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0"} Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.230248 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.771176879 podStartE2EDuration="54.230231221s" podCreationTimestamp="2026-01-29 08:56:45 +0000 UTC" firstStartedPulling="2026-01-29 08:56:48.33248151 +0000 UTC m=+1088.832069472" lastFinishedPulling="2026-01-29 08:57:31.791535862 +0000 UTC m=+1132.291123814" observedRunningTime="2026-01-29 08:57:39.227193489 +0000 UTC m=+1139.726781461" watchObservedRunningTime="2026-01-29 08:57:39.230231221 +0000 UTC m=+1139.729819173" Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.283118 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" podStartSLOduration=3.809032805 podStartE2EDuration="4.283103591s" podCreationTimestamp="2026-01-29 08:57:35 +0000 UTC" firstStartedPulling="2026-01-29 08:57:36.824947016 +0000 UTC m=+1137.324534968" lastFinishedPulling="2026-01-29 08:57:37.299017802 +0000 UTC m=+1137.798605754" observedRunningTime="2026-01-29 08:57:39.280319076 +0000 UTC m=+1139.779907028" watchObservedRunningTime="2026-01-29 08:57:39.283103591 +0000 UTC m=+1139.782691543" Jan 29 08:57:39 crc kubenswrapper[5031]: I0129 08:57:39.307178 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.261085066 podStartE2EDuration="4.307157307s" podCreationTimestamp="2026-01-29 08:57:35 +0000 UTC" firstStartedPulling="2026-01-29 08:57:37.0104297 +0000 UTC m=+1137.510017652" lastFinishedPulling="2026-01-29 08:57:38.056501941 +0000 UTC m=+1138.556089893" observedRunningTime="2026-01-29 08:57:39.300619682 +0000 UTC m=+1139.800207644" watchObservedRunningTime="2026-01-29 08:57:39.307157307 +0000 UTC m=+1139.806745259" Jan 29 08:57:40 crc kubenswrapper[5031]: I0129 08:57:40.224357 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a7149ef7-171a-48eb-a13a-af1982b4fbb1","Type":"ContainerStarted","Data":"50a23ab8b1299e9c7d8f471cf71b68311676bca25e10ceec21988791a1507fd6"} Jan 29 08:57:40 crc kubenswrapper[5031]: I0129 08:57:40.227065 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hhbcg" event={"ID":"f2cea483-4915-4fd9-8b38-e257ec143e34","Type":"ContainerStarted","Data":"385c3349f858fd4c97dceb024fced3287400977035b20f1585f15a44f8dc3b5a"} Jan 29 08:57:40 crc kubenswrapper[5031]: I0129 08:57:40.227738 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:40 crc kubenswrapper[5031]: I0129 08:57:40.274701 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.73377447 podStartE2EDuration="53.274678708s" podCreationTimestamp="2026-01-29 08:56:47 +0000 UTC" firstStartedPulling="2026-01-29 08:56:49.29253957 +0000 UTC m=+1089.792127522" lastFinishedPulling="2026-01-29 08:57:31.833443808 +0000 UTC m=+1132.333031760" observedRunningTime="2026-01-29 08:57:40.249229745 +0000 UTC m=+1140.748817707" watchObservedRunningTime="2026-01-29 08:57:40.274678708 +0000 UTC m=+1140.774266660" Jan 29 08:57:40 crc kubenswrapper[5031]: I0129 08:57:40.280910 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-hhbcg" podStartSLOduration=3.619088708 podStartE2EDuration="4.280897896s" podCreationTimestamp="2026-01-29 08:57:36 +0000 UTC" firstStartedPulling="2026-01-29 08:57:37.082993999 +0000 UTC m=+1137.582581961" lastFinishedPulling="2026-01-29 08:57:37.744803187 +0000 UTC m=+1138.244391149" observedRunningTime="2026-01-29 08:57:40.273670891 +0000 UTC m=+1140.773258853" watchObservedRunningTime="2026-01-29 08:57:40.280897896 +0000 UTC m=+1140.780485848" Jan 29 08:57:41 crc kubenswrapper[5031]: I0129 08:57:41.233971 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:41 crc kubenswrapper[5031]: I0129 08:57:41.340064 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 08:57:45 crc kubenswrapper[5031]: I0129 08:57:45.911395 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:46 crc kubenswrapper[5031]: I0129 08:57:46.431571 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:57:46 crc kubenswrapper[5031]: I0129 08:57:46.497182 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:46 crc kubenswrapper[5031]: I0129 08:57:46.497422 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="dnsmasq-dns" containerID="cri-o://10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0" gracePeriod=10 Jan 29 08:57:46 crc kubenswrapper[5031]: I0129 08:57:46.971690 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.083211 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rf8\" (UniqueName: \"kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8\") pod \"c2e2664b-821c-495b-900a-35362def28d2\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.083302 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc\") pod \"c2e2664b-821c-495b-900a-35362def28d2\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.083435 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb\") pod \"c2e2664b-821c-495b-900a-35362def28d2\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.083466 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config\") pod \"c2e2664b-821c-495b-900a-35362def28d2\" (UID: \"c2e2664b-821c-495b-900a-35362def28d2\") " Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.089061 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8" (OuterVolumeSpecName: "kube-api-access-22rf8") pod "c2e2664b-821c-495b-900a-35362def28d2" (UID: "c2e2664b-821c-495b-900a-35362def28d2"). InnerVolumeSpecName "kube-api-access-22rf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.129746 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config" (OuterVolumeSpecName: "config") pod "c2e2664b-821c-495b-900a-35362def28d2" (UID: "c2e2664b-821c-495b-900a-35362def28d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.129923 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2e2664b-821c-495b-900a-35362def28d2" (UID: "c2e2664b-821c-495b-900a-35362def28d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.131988 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2e2664b-821c-495b-900a-35362def28d2" (UID: "c2e2664b-821c-495b-900a-35362def28d2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.185440 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.185478 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.185489 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e2664b-821c-495b-900a-35362def28d2-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.185499 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22rf8\" (UniqueName: \"kubernetes.io/projected/c2e2664b-821c-495b-900a-35362def28d2-kube-api-access-22rf8\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.214018 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.214078 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.284388 5031 generic.go:334] "Generic (PLEG): container finished" podID="c2e2664b-821c-495b-900a-35362def28d2" containerID="10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0" exitCode=0 Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.284426 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" event={"ID":"c2e2664b-821c-495b-900a-35362def28d2","Type":"ContainerDied","Data":"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0"} Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.284445 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" event={"ID":"c2e2664b-821c-495b-900a-35362def28d2","Type":"ContainerDied","Data":"3eb62763ca22cdd747f9f7187b93262cc0844dd7f71b1f586eafa412f5486c5a"} Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.284445 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-6rwh7" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.284465 5031 scope.go:117] "RemoveContainer" containerID="10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.296530 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.300577 5031 scope.go:117] "RemoveContainer" containerID="7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.321524 5031 scope.go:117] "RemoveContainer" containerID="10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0" Jan 29 08:57:47 crc kubenswrapper[5031]: E0129 08:57:47.322083 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0\": container with ID starting with 10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0 not found: ID does not exist" containerID="10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.322133 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0"} err="failed to get container status \"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0\": rpc error: code = NotFound desc = could not find container \"10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0\": container with ID starting with 10b362053ba62a798cad2cb13e3a53c51da5a83583ee106b5b53b751f8e43de0 not found: ID does not exist" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.322159 5031 scope.go:117] "RemoveContainer" containerID="7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4" Jan 29 08:57:47 crc kubenswrapper[5031]: E0129 08:57:47.322836 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4\": container with ID starting with 7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4 not found: ID does not exist" containerID="7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.322856 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4"} err="failed to get container status \"7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4\": rpc error: code = NotFound desc = could not find container \"7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4\": container with ID starting with 7c404f95c25c488b4a3dc596bb2d07e93573e5ae234fa5f12a1fcc2df9943ac4 not found: ID does not exist" Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.335660 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.341852 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-6rwh7"] Jan 29 08:57:47 crc kubenswrapper[5031]: I0129 08:57:47.380738 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-galera-0" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.256204 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-j8wj5"] Jan 29 08:57:48 crc kubenswrapper[5031]: E0129 08:57:48.256600 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="dnsmasq-dns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.256613 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="dnsmasq-dns" Jan 29 08:57:48 crc kubenswrapper[5031]: E0129 08:57:48.256641 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="init" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.256650 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="init" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.256819 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e2664b-821c-495b-900a-35362def28d2" containerName="dnsmasq-dns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.257326 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.266486 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-04ac-account-create-update-8pqhd"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.267610 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.273705 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j8wj5"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.273949 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.306551 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.306740 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.306850 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwjd\" (UniqueName: \"kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.306990 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gxz4\" (UniqueName: 
\"kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.310162 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e2664b-821c-495b-900a-35362def28d2" path="/var/lib/kubelet/pods/c2e2664b-821c-495b-900a-35362def28d2/volumes" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.310894 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-04ac-account-create-update-8pqhd"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.409273 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.409406 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.409434 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.409477 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwjd\" (UniqueName: \"kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.410441 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.410571 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.428472 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4\") pod \"keystone-db-create-j8wj5\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.430033 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2jwjd\" (UniqueName: \"kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd\") pod \"keystone-04ac-account-create-update-8pqhd\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.431222 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.431262 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.516026 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.520180 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nsfns"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.521656 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.533348 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nsfns"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.574877 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.607413 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.612731 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.612830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfsf\" (UniqueName: \"kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.671749 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-63c5-account-create-update-rrljl"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.674782 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.677673 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.687784 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-63c5-account-create-update-rrljl"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.732991 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.733077 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltfsf\" (UniqueName: \"kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.733833 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.759350 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltfsf\" (UniqueName: \"kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf\") pod \"placement-db-create-nsfns\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.794254 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dmnkv"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.797937 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.811498 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dmnkv"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.834439 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.834517 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpzr\" (UniqueName: \"kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.851148 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nsfns" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.879334 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-66e1-account-create-update-mrd9n"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.880318 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.885355 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.897405 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-66e1-account-create-update-mrd9n"] Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.935688 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9ht\" (UniqueName: \"kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht\") pod \"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.935993 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.936033 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpzr\" (UniqueName: \"kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.936093 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts\") pod \"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.936954 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:48 crc kubenswrapper[5031]: I0129 08:57:48.954172 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpzr\" (UniqueName: \"kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr\") pod \"placement-63c5-account-create-update-rrljl\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.037660 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts\") pod 
\"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.037728 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwb4\" (UniqueName: \"kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.037762 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.037842 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9ht\" (UniqueName: \"kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht\") pod \"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.041915 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts\") pod \"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.043846 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.059516 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9ht\" (UniqueName: \"kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht\") pod \"glance-db-create-dmnkv\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.123949 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.139280 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nwb4\" (UniqueName: \"kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.139334 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.140187 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.156134 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-04ac-account-create-update-8pqhd"] Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.163265 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j8wj5"] Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.165095 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nwb4\" (UniqueName: \"kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4\") pod \"glance-66e1-account-create-update-mrd9n\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: W0129 08:57:49.170077 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7863fb67_80a0_474b_9b3a_f75062688a55.slice/crio-f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f WatchSource:0}: Error finding container f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f: Status 404 returned error can't find the container with id f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f Jan 29 08:57:49 crc kubenswrapper[5031]: W0129 08:57:49.174931 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4401fb39_e95c_475e_8f56_c251f9f2247f.slice/crio-f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2 WatchSource:0}: Error finding container f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2: Status 404 returned error can't find the container with id f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2 Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.266858 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.315288 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j8wj5" event={"ID":"7863fb67-80a0-474b-9b3a-f75062688a55","Type":"ContainerStarted","Data":"f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f"} Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.316183 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nsfns"] Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.319390 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04ac-account-create-update-8pqhd" event={"ID":"4401fb39-e95c-475e-8f56-c251f9f2247f","Type":"ContainerStarted","Data":"f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2"} Jan 29 08:57:49 crc kubenswrapper[5031]: W0129 08:57:49.327744 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod385ba6d3_a61c_413d_bb8f_ff1d5ccd6a2a.slice/crio-53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548 WatchSource:0}: Error finding container 53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548: Status 404 returned error can't find the container with id 53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548 Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.427158 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.530670 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-63c5-account-create-update-rrljl"] Jan 29 08:57:49 crc kubenswrapper[5031]: W0129 08:57:49.543572 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fee06e8_d5a9_4552_9f69_353f9666a3f2.slice/crio-c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324 WatchSource:0}: Error finding container c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324: Status 404 returned error can't find the container with id c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324 Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.614634 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dmnkv"] Jan 29 08:57:49 crc kubenswrapper[5031]: I0129 08:57:49.730340 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-66e1-account-create-update-mrd9n"] Jan 29 08:57:50 crc kubenswrapper[5031]: I0129 08:57:50.329932 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-66e1-account-create-update-mrd9n" event={"ID":"b8aff16d-f588-4c13-be4a-f2cc4bef00df","Type":"ContainerStarted","Data":"c55e007296ee734add2cd43668dde0dc8e3031f7b6ea249efce3917403247e3d"} Jan 29 08:57:50 crc kubenswrapper[5031]: I0129 08:57:50.340652 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-63c5-account-create-update-rrljl" event={"ID":"3fee06e8-d5a9-4552-9f69-353f9666a3f2","Type":"ContainerStarted","Data":"c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324"} Jan 29 08:57:50 crc kubenswrapper[5031]: I0129 08:57:50.345879 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dmnkv" 
event={"ID":"670cace3-776d-44d9-91d9-fdcdd5ba1c89","Type":"ContainerStarted","Data":"79760a68e96b2f91f14ebb320b5eb1645cd4c0edeb5766226f56b22ecb1c4c7d"} Jan 29 08:57:50 crc kubenswrapper[5031]: I0129 08:57:50.347415 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nsfns" event={"ID":"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a","Type":"ContainerStarted","Data":"53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548"} Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.357145 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-63c5-account-create-update-rrljl" event={"ID":"3fee06e8-d5a9-4552-9f69-353f9666a3f2","Type":"ContainerStarted","Data":"a69d1a6e5c97193ffce3f3eb6768a0f1a11f9370b6aacfcd10e772597294fbab"} Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.359387 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dmnkv" event={"ID":"670cace3-776d-44d9-91d9-fdcdd5ba1c89","Type":"ContainerStarted","Data":"60dfb9aa64b85c3cab9504d2ba64c04a2bb226b42153795c932af735c8855450"} Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.361328 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-66e1-account-create-update-mrd9n" event={"ID":"b8aff16d-f588-4c13-be4a-f2cc4bef00df","Type":"ContainerStarted","Data":"be0a93399a4854262399f0a1ee1dedec38e8192ec116fa3b01b011375fe8b7af"} Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.378823 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-63c5-account-create-update-rrljl" podStartSLOduration=3.378801817 podStartE2EDuration="3.378801817s" podCreationTimestamp="2026-01-29 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:57:51.375692213 +0000 UTC m=+1151.875280165" watchObservedRunningTime="2026-01-29 08:57:51.378801817 +0000 UTC m=+1151.878389769" Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.392691 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dmnkv" podStartSLOduration=3.392673029 podStartE2EDuration="3.392673029s" podCreationTimestamp="2026-01-29 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:57:51.388727693 +0000 UTC m=+1151.888315655" watchObservedRunningTime="2026-01-29 08:57:51.392673029 +0000 UTC m=+1151.892260981" Jan 29 08:57:51 crc kubenswrapper[5031]: I0129 08:57:51.413546 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-66e1-account-create-update-mrd9n" podStartSLOduration=3.41353024 podStartE2EDuration="3.41353024s" podCreationTimestamp="2026-01-29 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:57:51.412177714 +0000 UTC m=+1151.911765666" watchObservedRunningTime="2026-01-29 08:57:51.41353024 +0000 UTC m=+1151.913118192" Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.371903 5031 generic.go:334] "Generic (PLEG): container finished" podID="7863fb67-80a0-474b-9b3a-f75062688a55" containerID="53e59f6e812140e255503664f91c80519b4f73f97c3aa25b86d202420c769ef4" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.371992 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-j8wj5" event={"ID":"7863fb67-80a0-474b-9b3a-f75062688a55","Type":"ContainerDied","Data":"53e59f6e812140e255503664f91c80519b4f73f97c3aa25b86d202420c769ef4"} Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.373584 5031 generic.go:334] "Generic (PLEG): container finished" podID="670cace3-776d-44d9-91d9-fdcdd5ba1c89" containerID="60dfb9aa64b85c3cab9504d2ba64c04a2bb226b42153795c932af735c8855450" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.373641 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dmnkv" event={"ID":"670cace3-776d-44d9-91d9-fdcdd5ba1c89","Type":"ContainerDied","Data":"60dfb9aa64b85c3cab9504d2ba64c04a2bb226b42153795c932af735c8855450"} Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.375186 5031 generic.go:334] "Generic (PLEG): container finished" podID="4401fb39-e95c-475e-8f56-c251f9f2247f" containerID="d27a2e4a1417aad474522323cd55b645a0cbdee017f8a1eca19ac943c03430cf" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.375236 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04ac-account-create-update-8pqhd" event={"ID":"4401fb39-e95c-475e-8f56-c251f9f2247f","Type":"ContainerDied","Data":"d27a2e4a1417aad474522323cd55b645a0cbdee017f8a1eca19ac943c03430cf"} Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.376860 5031 generic.go:334] "Generic (PLEG): container finished" podID="385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" containerID="20962b37eaf7a67d3307bc2d81d0178c4ad97215d53f4a87129505fee765c996" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.376925 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nsfns" event={"ID":"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a","Type":"ContainerDied","Data":"20962b37eaf7a67d3307bc2d81d0178c4ad97215d53f4a87129505fee765c996"} Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.379205 5031 generic.go:334] "Generic (PLEG): container finished" podID="b8aff16d-f588-4c13-be4a-f2cc4bef00df" containerID="be0a93399a4854262399f0a1ee1dedec38e8192ec116fa3b01b011375fe8b7af" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.379267 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-66e1-account-create-update-mrd9n" event={"ID":"b8aff16d-f588-4c13-be4a-f2cc4bef00df","Type":"ContainerDied","Data":"be0a93399a4854262399f0a1ee1dedec38e8192ec116fa3b01b011375fe8b7af"} Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.380588 5031 generic.go:334] "Generic (PLEG): container finished" podID="3fee06e8-d5a9-4552-9f69-353f9666a3f2" containerID="a69d1a6e5c97193ffce3f3eb6768a0f1a11f9370b6aacfcd10e772597294fbab" exitCode=0 Jan 29 08:57:52 crc kubenswrapper[5031]: I0129 08:57:52.380626 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-63c5-account-create-update-rrljl" event={"ID":"3fee06e8-d5a9-4552-9f69-353f9666a3f2","Type":"ContainerDied","Data":"a69d1a6e5c97193ffce3f3eb6768a0f1a11f9370b6aacfcd10e772597294fbab"} Jan 29 08:57:53 crc kubenswrapper[5031]: I0129 08:57:53.804564 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nsfns" Jan 29 08:57:53 crc kubenswrapper[5031]: I0129 08:57:53.922106 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts\") pod \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " Jan 29 08:57:53 crc kubenswrapper[5031]: I0129 08:57:53.922236 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltfsf\" (UniqueName: \"kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf\") pod \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\" (UID: \"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a\") " Jan 29 08:57:53 crc kubenswrapper[5031]: I0129 08:57:53.924340 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" (UID: "385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:53 crc kubenswrapper[5031]: I0129 08:57:53.938915 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf" (OuterVolumeSpecName: "kube-api-access-ltfsf") pod "385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" (UID: "385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a"). InnerVolumeSpecName "kube-api-access-ltfsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.024309 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.024346 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltfsf\" (UniqueName: \"kubernetes.io/projected/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a-kube-api-access-ltfsf\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.032167 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.037558 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.049143 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.057895 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.069835 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.125229 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zpzr\" (UniqueName: \"kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr\") pod \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.125281 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts\") pod \"4401fb39-e95c-475e-8f56-c251f9f2247f\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.125308 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jwjd\" (UniqueName: \"kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd\") pod \"4401fb39-e95c-475e-8f56-c251f9f2247f\" (UID: \"4401fb39-e95c-475e-8f56-c251f9f2247f\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.125341 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts\") pod \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.125394 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts\") pod \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126307 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4401fb39-e95c-475e-8f56-c251f9f2247f" (UID: "4401fb39-e95c-475e-8f56-c251f9f2247f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126384 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc9ht\" (UniqueName: \"kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht\") pod \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\" (UID: \"670cace3-776d-44d9-91d9-fdcdd5ba1c89\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126503 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8aff16d-f588-4c13-be4a-f2cc4bef00df" (UID: "b8aff16d-f588-4c13-be4a-f2cc4bef00df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126522 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts\") pod \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\" (UID: \"3fee06e8-d5a9-4552-9f69-353f9666a3f2\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126805 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "670cace3-776d-44d9-91d9-fdcdd5ba1c89" (UID: "670cace3-776d-44d9-91d9-fdcdd5ba1c89"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.126878 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nwb4\" (UniqueName: \"kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4\") pod \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\" (UID: \"b8aff16d-f588-4c13-be4a-f2cc4bef00df\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.127036 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fee06e8-d5a9-4552-9f69-353f9666a3f2" (UID: "3fee06e8-d5a9-4552-9f69-353f9666a3f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.127687 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cace3-776d-44d9-91d9-fdcdd5ba1c89-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.127712 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fee06e8-d5a9-4552-9f69-353f9666a3f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.127722 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4401fb39-e95c-475e-8f56-c251f9f2247f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.127732 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aff16d-f588-4c13-be4a-f2cc4bef00df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.129611 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr" (OuterVolumeSpecName: "kube-api-access-9zpzr") pod "3fee06e8-d5a9-4552-9f69-353f9666a3f2" (UID: "3fee06e8-d5a9-4552-9f69-353f9666a3f2"). InnerVolumeSpecName "kube-api-access-9zpzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.130179 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht" (OuterVolumeSpecName: "kube-api-access-xc9ht") pod "670cace3-776d-44d9-91d9-fdcdd5ba1c89" (UID: "670cace3-776d-44d9-91d9-fdcdd5ba1c89"). InnerVolumeSpecName "kube-api-access-xc9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.130212 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4" (OuterVolumeSpecName: "kube-api-access-4nwb4") pod "b8aff16d-f588-4c13-be4a-f2cc4bef00df" (UID: "b8aff16d-f588-4c13-be4a-f2cc4bef00df"). InnerVolumeSpecName "kube-api-access-4nwb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.143051 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd" (OuterVolumeSpecName: "kube-api-access-2jwjd") pod "4401fb39-e95c-475e-8f56-c251f9f2247f" (UID: "4401fb39-e95c-475e-8f56-c251f9f2247f"). InnerVolumeSpecName "kube-api-access-2jwjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228459 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts\") pod \"7863fb67-80a0-474b-9b3a-f75062688a55\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228566 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4\") pod \"7863fb67-80a0-474b-9b3a-f75062688a55\" (UID: \"7863fb67-80a0-474b-9b3a-f75062688a55\") " Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228935 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zpzr\" (UniqueName: \"kubernetes.io/projected/3fee06e8-d5a9-4552-9f69-353f9666a3f2-kube-api-access-9zpzr\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228970 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jwjd\" (UniqueName: \"kubernetes.io/projected/4401fb39-e95c-475e-8f56-c251f9f2247f-kube-api-access-2jwjd\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228981 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc9ht\" (UniqueName: \"kubernetes.io/projected/670cace3-776d-44d9-91d9-fdcdd5ba1c89-kube-api-access-xc9ht\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.228990 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nwb4\" (UniqueName: \"kubernetes.io/projected/b8aff16d-f588-4c13-be4a-f2cc4bef00df-kube-api-access-4nwb4\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.229066 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "7863fb67-80a0-474b-9b3a-f75062688a55" (UID: "7863fb67-80a0-474b-9b3a-f75062688a55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.232163 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4" (OuterVolumeSpecName: "kube-api-access-6gxz4") pod "7863fb67-80a0-474b-9b3a-f75062688a55" (UID: "7863fb67-80a0-474b-9b3a-f75062688a55"). InnerVolumeSpecName "kube-api-access-6gxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.330709 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7863fb67-80a0-474b-9b3a-f75062688a55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.330742 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/7863fb67-80a0-474b-9b3a-f75062688a55-kube-api-access-6gxz4\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.402127 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-66e1-account-create-update-mrd9n" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.402081 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-66e1-account-create-update-mrd9n" event={"ID":"b8aff16d-f588-4c13-be4a-f2cc4bef00df","Type":"ContainerDied","Data":"c55e007296ee734add2cd43668dde0dc8e3031f7b6ea249efce3917403247e3d"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.402986 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55e007296ee734add2cd43668dde0dc8e3031f7b6ea249efce3917403247e3d" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.405072 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-63c5-account-create-update-rrljl" event={"ID":"3fee06e8-d5a9-4552-9f69-353f9666a3f2","Type":"ContainerDied","Data":"c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.405130 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ca600c76ba9d599356bafc7036720f68ece547778cfcc8b636cc3f1e5bf324" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.406167 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-63c5-account-create-update-rrljl" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.411090 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-j8wj5" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.411121 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j8wj5" event={"ID":"7863fb67-80a0-474b-9b3a-f75062688a55","Type":"ContainerDied","Data":"f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.411190 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7eceed808ca0a6b89ce870b388759382121ea33328affb505132c6acfe5183f" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.416167 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dmnkv" event={"ID":"670cace3-776d-44d9-91d9-fdcdd5ba1c89","Type":"ContainerDied","Data":"79760a68e96b2f91f14ebb320b5eb1645cd4c0edeb5766226f56b22ecb1c4c7d"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.416239 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79760a68e96b2f91f14ebb320b5eb1645cd4c0edeb5766226f56b22ecb1c4c7d" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.416176 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dmnkv" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.423803 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04ac-account-create-update-8pqhd" event={"ID":"4401fb39-e95c-475e-8f56-c251f9f2247f","Type":"ContainerDied","Data":"f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.424010 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6cbe81494fe4f63c219c798e77dfcb61f4453426d43cc4c34b7aa7f624842d2" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.423828 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04ac-account-create-update-8pqhd" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.429603 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nsfns" event={"ID":"385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a","Type":"ContainerDied","Data":"53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548"} Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.429647 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53db2f446cd711173ecd218d71fdd7e911f7a69fa67aeeaba36a132e2b600548" Jan 29 08:57:54 crc kubenswrapper[5031]: I0129 08:57:54.429696 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nsfns" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.303436 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5cm9n"] Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304083 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4401fb39-e95c-475e-8f56-c251f9f2247f" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304096 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4401fb39-e95c-475e-8f56-c251f9f2247f" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304128 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670cace3-776d-44d9-91d9-fdcdd5ba1c89" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304135 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="670cace3-776d-44d9-91d9-fdcdd5ba1c89" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304151 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7863fb67-80a0-474b-9b3a-f75062688a55" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304157 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7863fb67-80a0-474b-9b3a-f75062688a55" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304172 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304178 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304196 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aff16d-f588-4c13-be4a-f2cc4bef00df" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304201 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aff16d-f588-4c13-be4a-f2cc4bef00df" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: E0129 08:57:55.304215 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fee06e8-d5a9-4552-9f69-353f9666a3f2" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304220 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fee06e8-d5a9-4552-9f69-353f9666a3f2" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304408 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fee06e8-d5a9-4552-9f69-353f9666a3f2" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304420 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aff16d-f588-4c13-be4a-f2cc4bef00df" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304437 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7863fb67-80a0-474b-9b3a-f75062688a55" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304445 5031 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4401fb39-e95c-475e-8f56-c251f9f2247f" containerName="mariadb-account-create-update" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304453 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.304461 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="670cace3-776d-44d9-91d9-fdcdd5ba1c89" containerName="mariadb-database-create" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.305495 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.307928 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.313593 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5cm9n"] Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.449531 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmml\" (UniqueName: \"kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.449610 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.551626 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmml\" (UniqueName: \"kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.551706 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.552499 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 08:57:55.568621 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmml\" (UniqueName: \"kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml\") pod \"root-account-create-update-5cm9n\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:55 crc kubenswrapper[5031]: I0129 
08:57:55.685008 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:56 crc kubenswrapper[5031]: I0129 08:57:56.115824 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5cm9n"] Jan 29 08:57:56 crc kubenswrapper[5031]: I0129 08:57:56.446399 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5cm9n" event={"ID":"0bd1f66d-dc08-44d0-8867-b901a8e38f38","Type":"ContainerStarted","Data":"104d9a34b5c68baed1e2bf10c3a91ab52c89d2ccce0e11a7258ba174d5aba08a"} Jan 29 08:57:56 crc kubenswrapper[5031]: I0129 08:57:56.446932 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5cm9n" event={"ID":"0bd1f66d-dc08-44d0-8867-b901a8e38f38","Type":"ContainerStarted","Data":"174141cc182c9290f51f573eeb611dce3d53c502dc9422ee06b33c48446e9c47"} Jan 29 08:57:56 crc kubenswrapper[5031]: I0129 08:57:56.468465 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5cm9n" podStartSLOduration=1.468442043 podStartE2EDuration="1.468442043s" podCreationTimestamp="2026-01-29 08:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:57:56.461165688 +0000 UTC m=+1156.960753640" watchObservedRunningTime="2026-01-29 08:57:56.468442043 +0000 UTC m=+1156.968029995" Jan 29 08:57:56 crc kubenswrapper[5031]: I0129 08:57:56.481164 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 08:57:57 crc kubenswrapper[5031]: I0129 08:57:57.454325 5031 generic.go:334] "Generic (PLEG): container finished" podID="0bd1f66d-dc08-44d0-8867-b901a8e38f38" containerID="104d9a34b5c68baed1e2bf10c3a91ab52c89d2ccce0e11a7258ba174d5aba08a" exitCode=0 Jan 29 08:57:57 crc kubenswrapper[5031]: I0129 08:57:57.454407 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5cm9n" event={"ID":"0bd1f66d-dc08-44d0-8867-b901a8e38f38","Type":"ContainerDied","Data":"104d9a34b5c68baed1e2bf10c3a91ab52c89d2ccce0e11a7258ba174d5aba08a"} Jan 29 08:57:58 crc kubenswrapper[5031]: I0129 08:57:58.784602 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:58 crc kubenswrapper[5031]: I0129 08:57:58.904547 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rmml\" (UniqueName: \"kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml\") pod \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " Jan 29 08:57:58 crc kubenswrapper[5031]: I0129 08:57:58.904922 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts\") pod \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\" (UID: \"0bd1f66d-dc08-44d0-8867-b901a8e38f38\") " Jan 29 08:57:58 crc kubenswrapper[5031]: I0129 08:57:58.905801 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bd1f66d-dc08-44d0-8867-b901a8e38f38" (UID: "0bd1f66d-dc08-44d0-8867-b901a8e38f38"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:57:58 crc kubenswrapper[5031]: I0129 08:57:58.910535 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml" (OuterVolumeSpecName: "kube-api-access-2rmml") pod "0bd1f66d-dc08-44d0-8867-b901a8e38f38" (UID: "0bd1f66d-dc08-44d0-8867-b901a8e38f38"). InnerVolumeSpecName "kube-api-access-2rmml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.007415 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd1f66d-dc08-44d0-8867-b901a8e38f38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.008004 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rmml\" (UniqueName: \"kubernetes.io/projected/0bd1f66d-dc08-44d0-8867-b901a8e38f38-kube-api-access-2rmml\") on node \"crc\" DevicePath \"\"" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.116861 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9cqv6"] Jan 29 08:57:59 crc kubenswrapper[5031]: E0129 08:57:59.117522 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd1f66d-dc08-44d0-8867-b901a8e38f38" containerName="mariadb-account-create-update" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.117637 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd1f66d-dc08-44d0-8867-b901a8e38f38" containerName="mariadb-account-create-update" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.117893 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd1f66d-dc08-44d0-8867-b901a8e38f38" containerName="mariadb-account-create-update" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.118620 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.121097 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.128993 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qn4rn" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.137756 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9cqv6"] Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.212159 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztnv8\" (UniqueName: \"kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.212262 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.212336 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.212457 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.314266 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztnv8\" (UniqueName: \"kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.314349 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.314419 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.314435 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle\") pod 
\"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.319388 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.319753 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.320829 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.337349 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztnv8\" (UniqueName: \"kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8\") pod \"glance-db-sync-9cqv6\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.438455 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9cqv6" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.475284 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5cm9n" event={"ID":"0bd1f66d-dc08-44d0-8867-b901a8e38f38","Type":"ContainerDied","Data":"174141cc182c9290f51f573eeb611dce3d53c502dc9422ee06b33c48446e9c47"} Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.475319 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5cm9n" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.475329 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174141cc182c9290f51f573eeb611dce3d53c502dc9422ee06b33c48446e9c47" Jan 29 08:57:59 crc kubenswrapper[5031]: I0129 08:57:59.795088 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9cqv6"] Jan 29 08:57:59 crc kubenswrapper[5031]: W0129 08:57:59.808063 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb73ab584_3221_45b8_bc6b_d979c88e8454.slice/crio-8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9 WatchSource:0}: Error finding container 8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9: Status 404 returned error can't find the container with id 8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9 Jan 29 08:58:00 crc kubenswrapper[5031]: I0129 08:58:00.345407 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-z6mp7" podUID="b34fd049-3d7e-4d5d-acfc-8e4c450bf857" containerName="ovn-controller" probeResult="failure" output=< Jan 29 08:58:00 crc kubenswrapper[5031]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 08:58:00 crc kubenswrapper[5031]: > Jan 29 08:58:00 crc kubenswrapper[5031]: I0129 08:58:00.483747 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9cqv6" event={"ID":"b73ab584-3221-45b8-bc6b-d979c88e8454","Type":"ContainerStarted","Data":"8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9"} Jan 29 08:58:02 crc kubenswrapper[5031]: I0129 08:58:02.086650 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5cm9n"] Jan 29 08:58:02 crc kubenswrapper[5031]: I0129 08:58:02.096363 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5cm9n"] Jan 29 08:58:02 crc kubenswrapper[5031]: I0129 08:58:02.308169 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd1f66d-dc08-44d0-8867-b901a8e38f38" path="/var/lib/kubelet/pods/0bd1f66d-dc08-44d0-8867-b901a8e38f38/volumes" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.233067 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-z6mp7" podUID="b34fd049-3d7e-4d5d-acfc-8e4c450bf857" containerName="ovn-controller" probeResult="failure" output=< Jan 29 08:58:05 crc kubenswrapper[5031]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 08:58:05 crc kubenswrapper[5031]: > Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.331698 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.382817 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lmq4s" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.618657 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-z6mp7-config-t695j"] Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.620116 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.622423 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.636867 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7-config-t695j"] Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748316 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748408 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fr9h\" (UniqueName: \"kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748442 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748479 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748523 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.748669 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.850735 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.850818 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn\") 
pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.850845 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fr9h\" (UniqueName: \"kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.850871 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.850902 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.851138 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.851455 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.851509 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.851543 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.852754 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.853910 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts\") pod \"ovn-controller-z6mp7-config-t695j\" 
(UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.873852 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fr9h\" (UniqueName: \"kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h\") pod \"ovn-controller-z6mp7-config-t695j\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:05 crc kubenswrapper[5031]: I0129 08:58:05.951215 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.504488 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7-config-t695j"] Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.533277 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-t695j" event={"ID":"3dd8507f-175d-4089-8777-4f8909938392","Type":"ContainerStarted","Data":"4c52a99ae5d4b73fb8c862d4e68c30c83f86654473897626f156d87775185a9c"} Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.534929 5031 generic.go:334] "Generic (PLEG): container finished" podID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerID="d308cbaf1d8f06db09add169a2872364927af335501f931edf11fcafcddf42c0" exitCode=0 Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.534975 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerDied","Data":"d308cbaf1d8f06db09add169a2872364927af335501f931edf11fcafcddf42c0"} Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.536545 5031 generic.go:334] "Generic (PLEG): container finished" podID="a9e34c17-fba9-4efa-8912-ede69c516560" containerID="248333fd4f79e20db6d18e37d447343ffb055ab9198e066636271c6a0039cfcd" exitCode=0 Jan 29 08:58:06 crc kubenswrapper[5031]: I0129 08:58:06.536576 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerDied","Data":"248333fd4f79e20db6d18e37d447343ffb055ab9198e066636271c6a0039cfcd"} Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.103803 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f748g"] Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.105203 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.129216 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f748g"] Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.235442 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.236735 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.237039 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtbp\" (UniqueName: \"kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.340546 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtbp\" (UniqueName: \"kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.340669 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.341527 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.366119 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtbp\" (UniqueName: \"kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp\") pod \"root-account-create-update-f748g\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.434973 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f748g" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.552293 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerStarted","Data":"80604cfe1e2c531a86bec2175bc5f49c52d4518f6371c416470cd0abb4d2a830"} Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.552649 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.557711 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerStarted","Data":"1e5eb5f612c550d875223b863d54744bd60785ca68ceb3514d702eb8f5ac5363"} Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.557885 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.561485 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-t695j" event={"ID":"3dd8507f-175d-4089-8777-4f8909938392","Type":"ContainerStarted","Data":"7e19c038c83184df7760b1682391971e92f4c14e32928e169beba330b490c7d2"} Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.602457 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.648216162 podStartE2EDuration="1m24.60243451s" podCreationTimestamp="2026-01-29 08:56:43 +0000 UTC" firstStartedPulling="2026-01-29 08:56:46.911962853 +0000 UTC m=+1087.411550805" lastFinishedPulling="2026-01-29 08:57:32.866181201 +0000 UTC m=+1133.365769153" observedRunningTime="2026-01-29 08:58:07.597484786 +0000 UTC m=+1168.097072758" watchObservedRunningTime="2026-01-29 08:58:07.60243451 +0000 UTC m=+1168.102022462" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.622704 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-z6mp7-config-t695j" podStartSLOduration=2.622680873 podStartE2EDuration="2.622680873s" podCreationTimestamp="2026-01-29 08:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:07.616284572 +0000 UTC m=+1168.115872524" watchObservedRunningTime="2026-01-29 08:58:07.622680873 +0000 UTC m=+1168.122268825" Jan 29 08:58:07 crc kubenswrapper[5031]: I0129 08:58:07.690118 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.197963136 podStartE2EDuration="1m23.690102035s" podCreationTimestamp="2026-01-29 08:56:44 +0000 UTC" firstStartedPulling="2026-01-29 08:56:47.368155194 +0000 UTC m=+1087.867743136" lastFinishedPulling="2026-01-29 08:57:32.860294083 +0000 UTC m=+1133.359882035" observedRunningTime="2026-01-29 08:58:07.687671049 +0000 UTC m=+1168.187259001" watchObservedRunningTime="2026-01-29 08:58:07.690102035 +0000 UTC m=+1168.189689987" Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.528536 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:58:08 crc 
kubenswrapper[5031]: I0129 08:58:08.529014 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.529090 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.530155 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.530255 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57" gracePeriod=600 Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.791930 5031 generic.go:334] "Generic (PLEG): container finished" podID="3dd8507f-175d-4089-8777-4f8909938392" containerID="7e19c038c83184df7760b1682391971e92f4c14e32928e169beba330b490c7d2" exitCode=0 Jan 29 08:58:08 crc kubenswrapper[5031]: I0129 08:58:08.792074 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-t695j" event={"ID":"3dd8507f-175d-4089-8777-4f8909938392","Type":"ContainerDied","Data":"7e19c038c83184df7760b1682391971e92f4c14e32928e169beba330b490c7d2"} Jan 29 08:58:09 crc kubenswrapper[5031]: I0129 08:58:09.810287 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57" exitCode=0 Jan 29 08:58:09 crc kubenswrapper[5031]: I0129 08:58:09.810556 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57"} Jan 29 08:58:09 crc kubenswrapper[5031]: I0129 08:58:09.810596 5031 scope.go:117] "RemoveContainer" containerID="16b92f6fdefb0958d7a7c20f1e33caf653c7a4682955f7b154681a53ac8f22bb" Jan 29 08:58:10 crc kubenswrapper[5031]: I0129 08:58:10.245664 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-z6mp7" Jan 29 08:58:16 crc kubenswrapper[5031]: I0129 08:58:16.036240 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 29 08:58:17 crc kubenswrapper[5031]: E0129 08:58:17.366528 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 08:58:17 crc kubenswrapper[5031]: 
E0129 08:58:17.366952 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztnv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9cqv6_openstack(b73ab584-3221-45b8-bc6b-d979c88e8454): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:58:17 crc kubenswrapper[5031]: E0129 08:58:17.368152 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9cqv6" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.478850 5031 util.go:48] "No ready sandbox for pod can be found. 
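[Annotation] The two probe failures above — the HTTP liveness GET against 127.0.0.1:8798/health for machine-config-daemon, and the TCP readiness dial to 10.217.0.98:5671 for rabbitmq-cell1-server-0 — are the kubelet's standard httpGet and tcpSocket probe types. A minimal standalone sketch of both checks follows; the endpoints are taken from the log, while the one-second timeout is an illustrative assumption, not the pods' configured value:

```go
// probecheck.go - a minimal sketch of HTTP and TCP health checks of the kind
// the kubelet runs; this is not the kubelet's prober code.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

// httpProbe succeeds on a status below 400, mirroring httpGet probe semantics.
func httpProbe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

// tcpProbe succeeds when a TCP connection opens, mirroring the tcpSocket
// probe that failed for rabbitmq-cell1-server-0 above.
func tcpProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	fmt.Println("liveness :", httpProbe("http://127.0.0.1:8798/health", time.Second))
	fmt.Println("readiness:", tcpProbe("10.217.0.98:5671", time.Second))
}
```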
Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532245 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532320 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532384 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fr9h\" (UniqueName: \"kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532405 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532411 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532424 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532444 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn\") pod \"3dd8507f-175d-4089-8777-4f8909938392\" (UID: \"3dd8507f-175d-4089-8777-4f8909938392\") " Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532448 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run" (OuterVolumeSpecName: "var-run") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532586 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532925 5031 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532940 5031 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.532950 5031 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3dd8507f-175d-4089-8777-4f8909938392-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.533292 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.533585 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts" (OuterVolumeSpecName: "scripts") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.584052 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h" (OuterVolumeSpecName: "kube-api-access-2fr9h") pod "3dd8507f-175d-4089-8777-4f8909938392" (UID: "3dd8507f-175d-4089-8777-4f8909938392"). InnerVolumeSpecName "kube-api-access-2fr9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.635816 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fr9h\" (UniqueName: \"kubernetes.io/projected/3dd8507f-175d-4089-8777-4f8909938392-kube-api-access-2fr9h\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.635918 5031 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.636137 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dd8507f-175d-4089-8777-4f8909938392-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.811819 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f748g"] Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.878952 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f748g" event={"ID":"97b1baa5-afbb-47ea-837a-8a69a979a417","Type":"ContainerStarted","Data":"b8f32c23e6fc5c4ea68b65985a4db63fc6efd314bd64f819a0ee9fe48ee9f98d"} Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.880880 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed"} Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.884766 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-t695j" Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.884779 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-t695j" event={"ID":"3dd8507f-175d-4089-8777-4f8909938392","Type":"ContainerDied","Data":"4c52a99ae5d4b73fb8c862d4e68c30c83f86654473897626f156d87775185a9c"} Jan 29 08:58:17 crc kubenswrapper[5031]: I0129 08:58:17.885038 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c52a99ae5d4b73fb8c862d4e68c30c83f86654473897626f156d87775185a9c" Jan 29 08:58:17 crc kubenswrapper[5031]: E0129 08:58:17.886345 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-9cqv6" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.587296 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-z6mp7-config-t695j"] Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.595157 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-z6mp7-config-t695j"] Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.689286 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-z6mp7-config-45546"] Jan 29 08:58:18 crc kubenswrapper[5031]: E0129 08:58:18.689675 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd8507f-175d-4089-8777-4f8909938392" containerName="ovn-config" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.689691 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd8507f-175d-4089-8777-4f8909938392" containerName="ovn-config" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.689839 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd8507f-175d-4089-8777-4f8909938392" containerName="ovn-config" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.690386 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.693530 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.703842 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7-config-45546"] Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.753248 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.753821 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.753983 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.754099 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67zj\" (UniqueName: \"kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.754284 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.754527 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855561 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67zj\" (UniqueName: \"kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855627 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855678 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855721 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855741 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.855775 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.856065 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.856246 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.856265 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.856760 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.858782 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.877692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67zj\" (UniqueName: \"kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj\") pod \"ovn-controller-z6mp7-config-45546\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.893890 5031 generic.go:334] "Generic (PLEG): container finished" podID="97b1baa5-afbb-47ea-837a-8a69a979a417" containerID="2115dab52bd1173809a834812d002b98a57b060bfa7b57239e9e2aaa5832cbff" exitCode=0 Jan 29 08:58:18 crc kubenswrapper[5031]: I0129 08:58:18.894204 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f748g" event={"ID":"97b1baa5-afbb-47ea-837a-8a69a979a417","Type":"ContainerDied","Data":"2115dab52bd1173809a834812d002b98a57b060bfa7b57239e9e2aaa5832cbff"} Jan 29 08:58:19 crc kubenswrapper[5031]: I0129 08:58:19.017827 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:19 crc kubenswrapper[5031]: I0129 08:58:19.480907 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-z6mp7-config-45546"] Jan 29 08:58:19 crc kubenswrapper[5031]: I0129 08:58:19.909330 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-45546" event={"ID":"164082a8-b09f-4763-a020-7cc8e7d82386","Type":"ContainerStarted","Data":"5e9ada4ea25ed430b28581f16d3a62a58f7caabea8e30601e8d306ccfe9c6c5b"} Jan 29 08:58:19 crc kubenswrapper[5031]: I0129 08:58:19.909838 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-45546" event={"ID":"164082a8-b09f-4763-a020-7cc8e7d82386","Type":"ContainerStarted","Data":"3bba81080b52b3f419b714e1a265032fbf3118fde7642854bc020418bb4c0bb9"} Jan 29 08:58:19 crc kubenswrapper[5031]: I0129 08:58:19.937474 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-z6mp7-config-45546" podStartSLOduration=1.93745265 podStartE2EDuration="1.93745265s" podCreationTimestamp="2026-01-29 08:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:19.931699026 +0000 UTC m=+1180.431286978" watchObservedRunningTime="2026-01-29 08:58:19.93745265 +0000 UTC m=+1180.437040602" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.213544 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f748g" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.309053 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dd8507f-175d-4089-8777-4f8909938392" path="/var/lib/kubelet/pods/3dd8507f-175d-4089-8777-4f8909938392/volumes" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.377544 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts\") pod \"97b1baa5-afbb-47ea-837a-8a69a979a417\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.377595 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgtbp\" (UniqueName: \"kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp\") pod \"97b1baa5-afbb-47ea-837a-8a69a979a417\" (UID: \"97b1baa5-afbb-47ea-837a-8a69a979a417\") " Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.379017 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97b1baa5-afbb-47ea-837a-8a69a979a417" (UID: "97b1baa5-afbb-47ea-837a-8a69a979a417"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.385875 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp" (OuterVolumeSpecName: "kube-api-access-zgtbp") pod "97b1baa5-afbb-47ea-837a-8a69a979a417" (UID: "97b1baa5-afbb-47ea-837a-8a69a979a417"). InnerVolumeSpecName "kube-api-access-zgtbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.479942 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b1baa5-afbb-47ea-837a-8a69a979a417-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.480015 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgtbp\" (UniqueName: \"kubernetes.io/projected/97b1baa5-afbb-47ea-837a-8a69a979a417-kube-api-access-zgtbp\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.918962 5031 generic.go:334] "Generic (PLEG): container finished" podID="164082a8-b09f-4763-a020-7cc8e7d82386" containerID="5e9ada4ea25ed430b28581f16d3a62a58f7caabea8e30601e8d306ccfe9c6c5b" exitCode=0 Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.919009 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-45546" event={"ID":"164082a8-b09f-4763-a020-7cc8e7d82386","Type":"ContainerDied","Data":"5e9ada4ea25ed430b28581f16d3a62a58f7caabea8e30601e8d306ccfe9c6c5b"} Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.921170 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f748g" event={"ID":"97b1baa5-afbb-47ea-837a-8a69a979a417","Type":"ContainerDied","Data":"b8f32c23e6fc5c4ea68b65985a4db63fc6efd314bd64f819a0ee9fe48ee9f98d"} Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.921200 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f748g" Jan 29 08:58:20 crc kubenswrapper[5031]: I0129 08:58:20.921209 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f32c23e6fc5c4ea68b65985a4db63fc6efd314bd64f819a0ee9fe48ee9f98d" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.307413 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.411717 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.411792 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.411845 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.411917 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run" (OuterVolumeSpecName: "var-run") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.411947 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412010 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67zj\" (UniqueName: \"kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412076 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412127 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn\") pod \"164082a8-b09f-4763-a020-7cc8e7d82386\" (UID: \"164082a8-b09f-4763-a020-7cc8e7d82386\") " Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412332 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412582 5031 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412610 5031 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412622 5031 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164082a8-b09f-4763-a020-7cc8e7d82386-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.412894 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.413237 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts" (OuterVolumeSpecName: "scripts") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.418573 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj" (OuterVolumeSpecName: "kube-api-access-f67zj") pod "164082a8-b09f-4763-a020-7cc8e7d82386" (UID: "164082a8-b09f-4763-a020-7cc8e7d82386"). InnerVolumeSpecName "kube-api-access-f67zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.513754 5031 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.513798 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67zj\" (UniqueName: \"kubernetes.io/projected/164082a8-b09f-4763-a020-7cc8e7d82386-kube-api-access-f67zj\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.513811 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164082a8-b09f-4763-a020-7cc8e7d82386-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.936568 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-z6mp7-config-45546" event={"ID":"164082a8-b09f-4763-a020-7cc8e7d82386","Type":"ContainerDied","Data":"3bba81080b52b3f419b714e1a265032fbf3118fde7642854bc020418bb4c0bb9"} Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.936615 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bba81080b52b3f419b714e1a265032fbf3118fde7642854bc020418bb4c0bb9" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.936670 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-z6mp7-config-45546" Jan 29 08:58:22 crc kubenswrapper[5031]: I0129 08:58:22.999270 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-z6mp7-config-45546"] Jan 29 08:58:23 crc kubenswrapper[5031]: I0129 08:58:23.005793 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-z6mp7-config-45546"] Jan 29 08:58:24 crc kubenswrapper[5031]: I0129 08:58:24.291765 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164082a8-b09f-4763-a020-7cc8e7d82386" path="/var/lib/kubelet/pods/164082a8-b09f-4763-a020-7cc8e7d82386/volumes" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.412607 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.804786 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z4jdg"] Jan 29 08:58:25 crc kubenswrapper[5031]: E0129 08:58:25.805197 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164082a8-b09f-4763-a020-7cc8e7d82386" containerName="ovn-config" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.805219 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="164082a8-b09f-4763-a020-7cc8e7d82386" containerName="ovn-config" Jan 29 08:58:25 crc kubenswrapper[5031]: E0129 08:58:25.805228 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b1baa5-afbb-47ea-837a-8a69a979a417" containerName="mariadb-account-create-update" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.805235 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b1baa5-afbb-47ea-837a-8a69a979a417" containerName="mariadb-account-create-update" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.805417 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="164082a8-b09f-4763-a020-7cc8e7d82386" containerName="ovn-config" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.805436 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b1baa5-afbb-47ea-837a-8a69a979a417" containerName="mariadb-account-create-update" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.805940 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.829765 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z4jdg"] Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.926068 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-72de-account-create-update-9fgbw"] Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.927344 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.929772 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.937825 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-72de-account-create-update-9fgbw"] Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.967851 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87gth\" (UniqueName: \"kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:25 crc kubenswrapper[5031]: I0129 08:58:25.968008 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.001150 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5nwnb"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.002278 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.017293 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5nwnb"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.029017 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3c6d-account-create-update-vkqsr"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.030323 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.033830 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.045407 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.070715 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfq4\" (UniqueName: \"kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.070780 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87gth\" (UniqueName: \"kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.070830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.070877 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.071665 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.082792 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3c6d-account-create-update-vkqsr"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.116242 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87gth\" (UniqueName: \"kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth\") pod \"barbican-db-create-z4jdg\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.127115 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172625 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172776 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172850 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172910 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6bw\" (UniqueName: \"kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172934 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h827b\" (UniqueName: \"kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.172964 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gfq4\" (UniqueName: \"kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.173789 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.206895 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-k647w"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.207946 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.259502 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.259682 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.259723 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.259926 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dbn2" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.270215 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gfq4\" (UniqueName: \"kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4\") pod \"barbican-72de-account-create-update-9fgbw\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.274281 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.274355 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.274405 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z6bw\" (UniqueName: \"kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.274423 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h827b\" (UniqueName: \"kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.275401 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.275901 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc 
kubenswrapper[5031]: I0129 08:58:26.300572 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k647w"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.335175 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z6bw\" (UniqueName: \"kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw\") pod \"cinder-db-create-5nwnb\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.336224 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h827b\" (UniqueName: \"kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b\") pod \"cinder-3c6d-account-create-update-vkqsr\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.338467 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3512-account-create-update-44zf5"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.339623 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.342482 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.347047 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zc5qg"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.349270 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.364975 5031 util.go:30] "No sandbox for pod can be found. 
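[Annotation] The reflector.go "Caches populated" lines show the kubelet starting a list-watch for each Secret or ConfigMap that a newly admitted pod references. The same list-watch/cache mechanism is exposed by client-go informers; a minimal sketch, assuming it runs in-cluster with RBAC to watch Secrets in the openstack namespace and a go.mod pulling k8s.io/client-go:

```go
// secret_watch.go - a minimal client-go informer over Secrets in "openstack",
// sketching the reflector/cache mechanism behind the "Caches populated for
// *v1.Secret" lines above. Assumes in-cluster config.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openstack"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("secret cached:", obj.(*corev1.Secret).Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, inf.HasSynced) // the "Caches populated" moment
	select {}
}
```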
Need to start a new one" pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.376077 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.376165 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwth\" (UniqueName: \"kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.376237 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.466814 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zc5qg"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.475106 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3512-account-create-update-44zf5"] Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478147 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478218 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkwth\" (UniqueName: \"kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478254 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478297 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478421 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " 
pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478450 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nc2\" (UniqueName: \"kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.478477 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6h9h\" (UniqueName: \"kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.483897 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.484277 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.501977 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkwth\" (UniqueName: \"kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth\") pod \"keystone-db-sync-k647w\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.544274 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.580313 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.580666 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6h9h\" (UniqueName: \"kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.581351 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.581802 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nc2\" (UniqueName: \"kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.582162 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.582911 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.609113 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nc2\" (UniqueName: \"kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2\") pod \"neutron-db-create-zc5qg\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.610814 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6h9h\" (UniqueName: \"kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h\") pod \"neutron-3512-account-create-update-44zf5\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.611296 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.631094 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.701630 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:26 crc kubenswrapper[5031]: I0129 08:58:26.716459 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.635064 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z4jdg"] Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.664971 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5nwnb"] Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.701084 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-72de-account-create-update-9fgbw"] Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.741048 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zc5qg"] Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.754036 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3c6d-account-create-update-vkqsr"] Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.816821 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3512-account-create-update-44zf5"] Jan 29 08:58:27 crc kubenswrapper[5031]: W0129 08:58:27.819502 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41a2f87_dbe3_4248_80d3_70df130c9a2d.slice/crio-5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b WatchSource:0}: Error finding container 5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b: Status 404 returned error can't find the container with id 5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b Jan 29 08:58:27 crc kubenswrapper[5031]: I0129 08:58:27.897587 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k647w"] Jan 29 08:58:27 crc kubenswrapper[5031]: W0129 08:58:27.938589 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0405ca10_f433_4290_a19b_5bb83028e6ae.slice/crio-8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b WatchSource:0}: Error finding container 8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b: Status 404 returned error can't find the container with id 8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.004122 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z4jdg" event={"ID":"ba0a8bed-92bc-406b-b79a-f922b405c505","Type":"ContainerStarted","Data":"83385e9bb72ae63ff356fc05c197bcc69703343012a3366cb9ca190cf72c52f3"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.007413 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3512-account-create-update-44zf5" event={"ID":"0405ca10-f433-4290-a19b-5bb83028e6ae","Type":"ContainerStarted","Data":"8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 
08:58:28.010064 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5nwnb" event={"ID":"008e23fd-2d25-4f4f-bf2e-441c840521e4","Type":"ContainerStarted","Data":"c28b67a158d74f15e6ff0c9e4ace935d61249263bf61be177070126dbb13d35e"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.013663 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zc5qg" event={"ID":"d41a2f87-dbe3-4248-80d3-70df130c9a2d","Type":"ContainerStarted","Data":"5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.015668 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3c6d-account-create-update-vkqsr" event={"ID":"bd1bc99f-ba99-439c-b71b-9652c34f6248","Type":"ContainerStarted","Data":"c36f80dfd55d7ece22e4d52024aaed630b738eeb39097067fc72a4872c706e23"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.018092 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-72de-account-create-update-9fgbw" event={"ID":"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49","Type":"ContainerStarted","Data":"cd55efefc9f1ada3cd3197ace0c2c02225a1dfb8ee897dce4cadc71e45f5b52e"} Jan 29 08:58:28 crc kubenswrapper[5031]: I0129 08:58:28.019622 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k647w" event={"ID":"085d265b-4cdb-44ae-8a06-fa3962a5546b","Type":"ContainerStarted","Data":"8940c19921e276b967cf5c05a728c3851def46149b72ec7a1a312a9327d38a6d"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.031598 5031 generic.go:334] "Generic (PLEG): container finished" podID="008e23fd-2d25-4f4f-bf2e-441c840521e4" containerID="e5ce0336f09c671175d918727f693dee3368638b73f1a7be8e21276c01b55de4" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.031648 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5nwnb" event={"ID":"008e23fd-2d25-4f4f-bf2e-441c840521e4","Type":"ContainerDied","Data":"e5ce0336f09c671175d918727f693dee3368638b73f1a7be8e21276c01b55de4"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.035606 5031 generic.go:334] "Generic (PLEG): container finished" podID="d41a2f87-dbe3-4248-80d3-70df130c9a2d" containerID="f9a892f4c750bf4858ea30a933b87615db491108148098369028287b7790229a" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.035653 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zc5qg" event={"ID":"d41a2f87-dbe3-4248-80d3-70df130c9a2d","Type":"ContainerDied","Data":"f9a892f4c750bf4858ea30a933b87615db491108148098369028287b7790229a"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.038441 5031 generic.go:334] "Generic (PLEG): container finished" podID="bd1bc99f-ba99-439c-b71b-9652c34f6248" containerID="f081159778cdd195a946d271bf87e8ef2b36c2073dd9dcb40cc0729d08c84321" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.038494 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3c6d-account-create-update-vkqsr" event={"ID":"bd1bc99f-ba99-439c-b71b-9652c34f6248","Type":"ContainerDied","Data":"f081159778cdd195a946d271bf87e8ef2b36c2073dd9dcb40cc0729d08c84321"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.040564 5031 generic.go:334] "Generic (PLEG): container finished" podID="bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" containerID="c4735564d79a518e62d8f8ce6c55f8d95e52d4c497fd4c062d0434952438b4de" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 
08:58:29.040616 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-72de-account-create-update-9fgbw" event={"ID":"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49","Type":"ContainerDied","Data":"c4735564d79a518e62d8f8ce6c55f8d95e52d4c497fd4c062d0434952438b4de"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.042884 5031 generic.go:334] "Generic (PLEG): container finished" podID="ba0a8bed-92bc-406b-b79a-f922b405c505" containerID="fafa7e1e88abda2228f34616541cda446adb5c89fa3c0827f9e04718c8668293" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.042932 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z4jdg" event={"ID":"ba0a8bed-92bc-406b-b79a-f922b405c505","Type":"ContainerDied","Data":"fafa7e1e88abda2228f34616541cda446adb5c89fa3c0827f9e04718c8668293"} Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.045915 5031 generic.go:334] "Generic (PLEG): container finished" podID="0405ca10-f433-4290-a19b-5bb83028e6ae" containerID="8a5fad5f695365328f59e95f5299e07e8b7b5f7ae4cc9fae45767a6d7ddddf0a" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[5031]: I0129 08:58:29.045965 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3512-account-create-update-44zf5" event={"ID":"0405ca10-f433-4290-a19b-5bb83028e6ae","Type":"ContainerDied","Data":"8a5fad5f695365328f59e95f5299e07e8b7b5f7ae4cc9fae45767a6d7ddddf0a"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.057776 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.066978 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.078164 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.109767 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.131682 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.131945 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.141074 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3c6d-account-create-update-vkqsr" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.142084 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3c6d-account-create-update-vkqsr" event={"ID":"bd1bc99f-ba99-439c-b71b-9652c34f6248","Type":"ContainerDied","Data":"c36f80dfd55d7ece22e4d52024aaed630b738eeb39097067fc72a4872c706e23"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.142139 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c36f80dfd55d7ece22e4d52024aaed630b738eeb39097067fc72a4872c706e23" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.160004 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-72de-account-create-update-9fgbw" event={"ID":"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49","Type":"ContainerDied","Data":"cd55efefc9f1ada3cd3197ace0c2c02225a1dfb8ee897dce4cadc71e45f5b52e"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.161032 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd55efefc9f1ada3cd3197ace0c2c02225a1dfb8ee897dce4cadc71e45f5b52e" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.160635 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-72de-account-create-update-9fgbw" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.164736 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z4jdg" event={"ID":"ba0a8bed-92bc-406b-b79a-f922b405c505","Type":"ContainerDied","Data":"83385e9bb72ae63ff356fc05c197bcc69703343012a3366cb9ca190cf72c52f3"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.164781 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83385e9bb72ae63ff356fc05c197bcc69703343012a3366cb9ca190cf72c52f3" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.164868 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z4jdg" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.168537 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5nwnb" event={"ID":"008e23fd-2d25-4f4f-bf2e-441c840521e4","Type":"ContainerDied","Data":"c28b67a158d74f15e6ff0c9e4ace935d61249263bf61be177070126dbb13d35e"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.168584 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28b67a158d74f15e6ff0c9e4ace935d61249263bf61be177070126dbb13d35e" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.168656 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5nwnb" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.178483 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3512-account-create-update-44zf5" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.178613 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3512-account-create-update-44zf5" event={"ID":"0405ca10-f433-4290-a19b-5bb83028e6ae","Type":"ContainerDied","Data":"8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.178668 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8846c2eefe8fe814ff39ffb38f6274a13c6b0b8e4bbfd8f5213820d1151e055b" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.185751 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zc5qg" event={"ID":"d41a2f87-dbe3-4248-80d3-70df130c9a2d","Type":"ContainerDied","Data":"5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b"} Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.185796 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b37372053e6556c31fc826abeb31fe88a1c1d8bb8af0efa8076ddc548d6a54b" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.185795 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zc5qg" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.235972 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h827b\" (UniqueName: \"kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b\") pod \"bd1bc99f-ba99-439c-b71b-9652c34f6248\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236060 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts\") pod \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236093 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87gth\" (UniqueName: \"kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth\") pod \"ba0a8bed-92bc-406b-b79a-f922b405c505\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236110 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts\") pod \"ba0a8bed-92bc-406b-b79a-f922b405c505\" (UID: \"ba0a8bed-92bc-406b-b79a-f922b405c505\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236128 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z6bw\" (UniqueName: \"kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw\") pod \"008e23fd-2d25-4f4f-bf2e-441c840521e4\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236161 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts\") pod \"0405ca10-f433-4290-a19b-5bb83028e6ae\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " Jan 29 08:58:35 crc kubenswrapper[5031]: 
I0129 08:58:35.236198 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts\") pod \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236224 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts\") pod \"008e23fd-2d25-4f4f-bf2e-441c840521e4\" (UID: \"008e23fd-2d25-4f4f-bf2e-441c840521e4\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236255 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gfq4\" (UniqueName: \"kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4\") pod \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\" (UID: \"bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236288 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts\") pod \"bd1bc99f-ba99-439c-b71b-9652c34f6248\" (UID: \"bd1bc99f-ba99-439c-b71b-9652c34f6248\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236336 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72nc2\" (UniqueName: \"kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2\") pod \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\" (UID: \"d41a2f87-dbe3-4248-80d3-70df130c9a2d\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236361 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6h9h\" (UniqueName: \"kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h\") pod \"0405ca10-f433-4290-a19b-5bb83028e6ae\" (UID: \"0405ca10-f433-4290-a19b-5bb83028e6ae\") " Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.236954 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0405ca10-f433-4290-a19b-5bb83028e6ae" (UID: "0405ca10-f433-4290-a19b-5bb83028e6ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.237982 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" (UID: "bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.238072 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405ca10-f433-4290-a19b-5bb83028e6ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.238345 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "008e23fd-2d25-4f4f-bf2e-441c840521e4" (UID: "008e23fd-2d25-4f4f-bf2e-441c840521e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.238388 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd1bc99f-ba99-439c-b71b-9652c34f6248" (UID: "bd1bc99f-ba99-439c-b71b-9652c34f6248"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.238921 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba0a8bed-92bc-406b-b79a-f922b405c505" (UID: "ba0a8bed-92bc-406b-b79a-f922b405c505"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.238937 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d41a2f87-dbe3-4248-80d3-70df130c9a2d" (UID: "d41a2f87-dbe3-4248-80d3-70df130c9a2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.241269 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4" (OuterVolumeSpecName: "kube-api-access-4gfq4") pod "bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" (UID: "bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49"). InnerVolumeSpecName "kube-api-access-4gfq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.241319 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b" (OuterVolumeSpecName: "kube-api-access-h827b") pod "bd1bc99f-ba99-439c-b71b-9652c34f6248" (UID: "bd1bc99f-ba99-439c-b71b-9652c34f6248"). InnerVolumeSpecName "kube-api-access-h827b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.245489 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2" (OuterVolumeSpecName: "kube-api-access-72nc2") pod "d41a2f87-dbe3-4248-80d3-70df130c9a2d" (UID: "d41a2f87-dbe3-4248-80d3-70df130c9a2d"). InnerVolumeSpecName "kube-api-access-72nc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.245782 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw" (OuterVolumeSpecName: "kube-api-access-6z6bw") pod "008e23fd-2d25-4f4f-bf2e-441c840521e4" (UID: "008e23fd-2d25-4f4f-bf2e-441c840521e4"). InnerVolumeSpecName "kube-api-access-6z6bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.246181 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth" (OuterVolumeSpecName: "kube-api-access-87gth") pod "ba0a8bed-92bc-406b-b79a-f922b405c505" (UID: "ba0a8bed-92bc-406b-b79a-f922b405c505"). InnerVolumeSpecName "kube-api-access-87gth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.246192 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h" (OuterVolumeSpecName: "kube-api-access-l6h9h") pod "0405ca10-f433-4290-a19b-5bb83028e6ae" (UID: "0405ca10-f433-4290-a19b-5bb83028e6ae"). InnerVolumeSpecName "kube-api-access-l6h9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340521 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h827b\" (UniqueName: \"kubernetes.io/projected/bd1bc99f-ba99-439c-b71b-9652c34f6248-kube-api-access-h827b\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340561 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d41a2f87-dbe3-4248-80d3-70df130c9a2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340576 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87gth\" (UniqueName: \"kubernetes.io/projected/ba0a8bed-92bc-406b-b79a-f922b405c505-kube-api-access-87gth\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340588 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba0a8bed-92bc-406b-b79a-f922b405c505-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340599 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z6bw\" (UniqueName: \"kubernetes.io/projected/008e23fd-2d25-4f4f-bf2e-441c840521e4-kube-api-access-6z6bw\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340611 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340622 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008e23fd-2d25-4f4f-bf2e-441c840521e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340633 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gfq4\" (UniqueName: 
\"kubernetes.io/projected/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49-kube-api-access-4gfq4\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340644 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd1bc99f-ba99-439c-b71b-9652c34f6248-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340655 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72nc2\" (UniqueName: \"kubernetes.io/projected/d41a2f87-dbe3-4248-80d3-70df130c9a2d-kube-api-access-72nc2\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:35 crc kubenswrapper[5031]: I0129 08:58:35.340665 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6h9h\" (UniqueName: \"kubernetes.io/projected/0405ca10-f433-4290-a19b-5bb83028e6ae-kube-api-access-l6h9h\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:36 crc kubenswrapper[5031]: I0129 08:58:36.196434 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9cqv6" event={"ID":"b73ab584-3221-45b8-bc6b-d979c88e8454","Type":"ContainerStarted","Data":"8892212ccb2a90e581f8442c663c443cfc8a28dcb5877c5e9b5696e6aae795aa"} Jan 29 08:58:36 crc kubenswrapper[5031]: I0129 08:58:36.203196 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k647w" event={"ID":"085d265b-4cdb-44ae-8a06-fa3962a5546b","Type":"ContainerStarted","Data":"8543e2110daee4bd7cd5c6c9a2366083953514f8a21b2ba08a92c7630d527ddc"} Jan 29 08:58:36 crc kubenswrapper[5031]: I0129 08:58:36.224045 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9cqv6" podStartSLOduration=2.131603622 podStartE2EDuration="37.224025853s" podCreationTimestamp="2026-01-29 08:57:59 +0000 UTC" firstStartedPulling="2026-01-29 08:57:59.810823121 +0000 UTC m=+1160.310411073" lastFinishedPulling="2026-01-29 08:58:34.903245352 +0000 UTC m=+1195.402833304" observedRunningTime="2026-01-29 08:58:36.220263781 +0000 UTC m=+1196.719851733" watchObservedRunningTime="2026-01-29 08:58:36.224025853 +0000 UTC m=+1196.723613815" Jan 29 08:58:36 crc kubenswrapper[5031]: I0129 08:58:36.241495 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-k647w" podStartSLOduration=3.308862358 podStartE2EDuration="10.241474371s" podCreationTimestamp="2026-01-29 08:58:26 +0000 UTC" firstStartedPulling="2026-01-29 08:58:27.97700652 +0000 UTC m=+1188.476594472" lastFinishedPulling="2026-01-29 08:58:34.909618533 +0000 UTC m=+1195.409206485" observedRunningTime="2026-01-29 08:58:36.238747888 +0000 UTC m=+1196.738335840" watchObservedRunningTime="2026-01-29 08:58:36.241474371 +0000 UTC m=+1196.741062353" Jan 29 08:58:40 crc kubenswrapper[5031]: I0129 08:58:40.253844 5031 generic.go:334] "Generic (PLEG): container finished" podID="085d265b-4cdb-44ae-8a06-fa3962a5546b" containerID="8543e2110daee4bd7cd5c6c9a2366083953514f8a21b2ba08a92c7630d527ddc" exitCode=0 Jan 29 08:58:40 crc kubenswrapper[5031]: I0129 08:58:40.253933 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k647w" event={"ID":"085d265b-4cdb-44ae-8a06-fa3962a5546b","Type":"ContainerDied","Data":"8543e2110daee4bd7cd5c6c9a2366083953514f8a21b2ba08a92c7630d527ddc"} Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.576990 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.653532 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data\") pod \"085d265b-4cdb-44ae-8a06-fa3962a5546b\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.653618 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkwth\" (UniqueName: \"kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth\") pod \"085d265b-4cdb-44ae-8a06-fa3962a5546b\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.653689 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle\") pod \"085d265b-4cdb-44ae-8a06-fa3962a5546b\" (UID: \"085d265b-4cdb-44ae-8a06-fa3962a5546b\") " Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.659623 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth" (OuterVolumeSpecName: "kube-api-access-kkwth") pod "085d265b-4cdb-44ae-8a06-fa3962a5546b" (UID: "085d265b-4cdb-44ae-8a06-fa3962a5546b"). InnerVolumeSpecName "kube-api-access-kkwth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.679980 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085d265b-4cdb-44ae-8a06-fa3962a5546b" (UID: "085d265b-4cdb-44ae-8a06-fa3962a5546b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.708605 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data" (OuterVolumeSpecName: "config-data") pod "085d265b-4cdb-44ae-8a06-fa3962a5546b" (UID: "085d265b-4cdb-44ae-8a06-fa3962a5546b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.755194 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.755230 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkwth\" (UniqueName: \"kubernetes.io/projected/085d265b-4cdb-44ae-8a06-fa3962a5546b-kube-api-access-kkwth\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:41 crc kubenswrapper[5031]: I0129 08:58:41.755245 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085d265b-4cdb-44ae-8a06-fa3962a5546b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.275699 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k647w" event={"ID":"085d265b-4cdb-44ae-8a06-fa3962a5546b","Type":"ContainerDied","Data":"8940c19921e276b967cf5c05a728c3851def46149b72ec7a1a312a9327d38a6d"} Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.275748 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8940c19921e276b967cf5c05a728c3851def46149b72ec7a1a312a9327d38a6d" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.275765 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k647w" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.489624 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.490245 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1bc99f-ba99-439c-b71b-9652c34f6248" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.490310 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1bc99f-ba99-439c-b71b-9652c34f6248" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.491589 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e23fd-2d25-4f4f-bf2e-441c840521e4" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.491714 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e23fd-2d25-4f4f-bf2e-441c840521e4" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.491802 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41a2f87-dbe3-4248-80d3-70df130c9a2d" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.491883 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41a2f87-dbe3-4248-80d3-70df130c9a2d" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.491966 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085d265b-4cdb-44ae-8a06-fa3962a5546b" containerName="keystone-db-sync" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492026 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="085d265b-4cdb-44ae-8a06-fa3962a5546b" containerName="keystone-db-sync" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.492082 5031 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492159 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.492244 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0405ca10-f433-4290-a19b-5bb83028e6ae" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492305 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0405ca10-f433-4290-a19b-5bb83028e6ae" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: E0129 08:58:42.492385 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0a8bed-92bc-406b-b79a-f922b405c505" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492526 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0a8bed-92bc-406b-b79a-f922b405c505" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492767 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0a8bed-92bc-406b-b79a-f922b405c505" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492839 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1bc99f-ba99-439c-b71b-9652c34f6248" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492905 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0405ca10-f433-4290-a19b-5bb83028e6ae" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.492984 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" containerName="mariadb-account-create-update" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.493046 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41a2f87-dbe3-4248-80d3-70df130c9a2d" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.493136 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="085d265b-4cdb-44ae-8a06-fa3962a5546b" containerName="keystone-db-sync" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.493215 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="008e23fd-2d25-4f4f-bf2e-441c840521e4" containerName="mariadb-database-create" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.494206 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.506504 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.546427 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q8xzb"] Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.553831 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.559829 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.560183 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.560640 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.560833 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dbn2" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.561032 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.566970 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q8xzb"] Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.736034 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.738008 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739504 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739544 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739564 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8sn\" (UniqueName: \"kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739586 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg9f\" (UniqueName: \"kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739611 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739629 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739644 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739691 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739708 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739724 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.739760 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.740413 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:58:42 crc kubenswrapper[5031]: I0129 08:58:42.740694 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.301738 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305433 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305516 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8sn\" (UniqueName: 
\"kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305567 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzg9f\" (UniqueName: \"kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305635 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305665 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305702 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.305995 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306080 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306127 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306196 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306216 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: 
\"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306241 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306418 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306490 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306536 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306607 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csw2s\" (UniqueName: \"kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.306673 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.312624 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.313492 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.313978 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.315553 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.316070 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.321198 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.321881 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.326207 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.328811 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.334885 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzg9f\" (UniqueName: \"kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f\") pod \"dnsmasq-dns-66fbd85b65-rv72b\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.338347 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8sn\" (UniqueName: \"kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn\") pod \"keystone-bootstrap-q8xzb\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.341920 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.362906 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kmdhl"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.364395 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.370706 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.371177 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-44vxx" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.372596 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tkg9p"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.375993 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.380084 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.380876 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.381459 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r8c77" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.384880 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kmdhl"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.393120 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.398027 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xg72z"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.398977 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.403152 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.403313 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9qwcc" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.403273 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.409274 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.413909 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.413962 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.414060 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.414101 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.414128 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csw2s\" (UniqueName: \"kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.414166 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.414209 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.415108 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.416206 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.418222 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " 
pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.425825 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.427129 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tkg9p"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.432756 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.435294 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.440098 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csw2s\" (UniqueName: \"kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s\") pod \"ceilometer-0\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.444776 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.463001 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hrprs"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.464488 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.466477 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5868q" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.467353 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.468239 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.484468 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.486592 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.492555 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hrprs"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.517681 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518088 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm82t\" (UniqueName: \"kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518214 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518518 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518734 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518851 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.518947 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.519054 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4fk\" (UniqueName: \"kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.519178 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.519292 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8zs\" (UniqueName: \"kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.519415 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.519542 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.521722 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xg72z"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.544556 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.549433 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.620766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm82t\" (UniqueName: \"kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631254 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631348 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631394 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 
08:58:43.631444 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631472 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631525 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631572 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631602 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631628 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631702 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzlkw\" (UniqueName: \"kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631760 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631827 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631868 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631900 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4fk\" (UniqueName: \"kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631936 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631955 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.631994 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw625\" (UniqueName: \"kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.632021 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8zs\" (UniqueName: \"kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.632050 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.632093 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.632143 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.634757 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.661120 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.669250 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.674058 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.675343 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.675843 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.676550 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.676781 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8zs\" (UniqueName: \"kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs\") pod \"barbican-db-sync-kmdhl\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.677082 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.685412 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4fk\" (UniqueName: \"kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 
08:58:43.686249 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm82t\" (UniqueName: \"kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t\") pod \"neutron-db-sync-tkg9p\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.689274 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data\") pod \"cinder-db-sync-xg72z\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743376 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743422 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743452 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743476 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743498 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743516 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743545 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzlkw\" (UniqueName: \"kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743593 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743616 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw625\" (UniqueName: \"kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.743665 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.746259 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.746857 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.747443 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.752946 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.753925 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.761520 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.763989 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 
29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.770239 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.778659 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw625\" (UniqueName: \"kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625\") pod \"placement-db-sync-hrprs\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.781614 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzlkw\" (UniqueName: \"kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw\") pod \"dnsmasq-dns-6bf59f66bf-2lrpv\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.870229 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.893492 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.921718 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xg72z" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.945035 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hrprs" Jan 29 08:58:43 crc kubenswrapper[5031]: I0129 08:58:43.987477 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.279048 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q8xzb"] Jan 29 08:58:44 crc kubenswrapper[5031]: W0129 08:58:44.296636 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd467a1_d0d0_4ca7_9195_ab8bc418fc7d.slice/crio-6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750 WatchSource:0}: Error finding container 6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750: Status 404 returned error can't find the container with id 6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750 Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.336656 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8xzb" event={"ID":"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d","Type":"ContainerStarted","Data":"6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750"} Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.398291 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.572884 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.730891 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kmdhl"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.743289 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xg72z"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.751201 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tkg9p"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.916275 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:58:44 crc kubenswrapper[5031]: I0129 08:58:44.926803 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hrprs"] Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.330241 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.376810 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hrprs" event={"ID":"64d57cae-e6a0-4e2d-8509-a19fa68fcf25","Type":"ContainerStarted","Data":"b866d85a41917ecec2c5d5c470f056123052ad2302bdd4964702ba469ed8aa66"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.381524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmdhl" event={"ID":"66e96bab-0ee6-41af-9223-9f510ad5bbec","Type":"ContainerStarted","Data":"4e79a2b1c6a72622902af9526a935186704bba25280c6d9f8c69313ec163d5ca"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.407521 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tkg9p" event={"ID":"8ea99f3b-a67a-4077-aba2-d6a5910779f3","Type":"ContainerStarted","Data":"30e9f3e3171c34b71e8c911f29972049f0e8bddfa4d21a0bc56e048277caa0a7"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.407567 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tkg9p" 
event={"ID":"8ea99f3b-a67a-4077-aba2-d6a5910779f3","Type":"ContainerStarted","Data":"c911929989a2570cd827772341340e02dabbcafdc3bde2eda1667123f23c74bd"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.440009 5031 generic.go:334] "Generic (PLEG): container finished" podID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerID="7bffc0030bf8640c917cc0d1849ac6359b2c723f253d7898a9166cb5b869ffb1" exitCode=0 Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.440102 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" event={"ID":"9998f77a-c972-45a5-9239-46fb7a98c8d8","Type":"ContainerDied","Data":"7bffc0030bf8640c917cc0d1849ac6359b2c723f253d7898a9166cb5b869ffb1"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.440129 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" event={"ID":"9998f77a-c972-45a5-9239-46fb7a98c8d8","Type":"ContainerStarted","Data":"1a1872696c09998c8a3a3c4a3086260691f450f528feac40821eb78b6e2220e0"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.471234 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tkg9p" podStartSLOduration=3.471209243 podStartE2EDuration="3.471209243s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:45.451285058 +0000 UTC m=+1205.950873010" watchObservedRunningTime="2026-01-29 08:58:45.471209243 +0000 UTC m=+1205.970797195" Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.484053 5031 generic.go:334] "Generic (PLEG): container finished" podID="5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" containerID="f2576a2ae19c56e11c65d1b607ecfde812f441a937abecace43022acb460e4dc" exitCode=0 Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.484125 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" event={"ID":"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8","Type":"ContainerDied","Data":"f2576a2ae19c56e11c65d1b607ecfde812f441a937abecace43022acb460e4dc"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.484229 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" event={"ID":"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8","Type":"ContainerStarted","Data":"3e3ea99d9aff6e74d4fbb79a7ac15bcb5dc790cc24aeec0b31f972436f854f03"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.493764 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xg72z" event={"ID":"997a6082-d87d-4954-b383-9b27e161be4e","Type":"ContainerStarted","Data":"42bfc308da7d9c2d7f5c49bf6f0dc7a6bb9655115009dcc4b3ea5ac962689585"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.498213 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerStarted","Data":"ce55193983fd24c8f4040fff7e43acf6c5f03a885d681dec5fef77ec13239ef6"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.506096 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8xzb" event={"ID":"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d","Type":"ContainerStarted","Data":"1e7b40902b272dbf69bd78c3a0692143594320eed9dd6b309b99d45b6068a6aa"} Jan 29 08:58:45 crc kubenswrapper[5031]: I0129 08:58:45.547738 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-q8xzb" podStartSLOduration=3.547715948 podStartE2EDuration="3.547715948s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:45.545934031 +0000 UTC m=+1206.045521993" watchObservedRunningTime="2026-01-29 08:58:45.547715948 +0000 UTC m=+1206.047303900" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.028954 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.144243 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb\") pod \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.144295 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc\") pod \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.144497 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config\") pod \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.144574 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb\") pod \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.144613 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzg9f\" (UniqueName: \"kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f\") pod \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\" (UID: \"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8\") " Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.164703 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f" (OuterVolumeSpecName: "kube-api-access-xzg9f") pod "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" (UID: "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8"). InnerVolumeSpecName "kube-api-access-xzg9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.179929 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" (UID: "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.186986 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config" (OuterVolumeSpecName: "config") pod "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" (UID: "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.205723 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" (UID: "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.205963 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" (UID: "5a8fa5d4-f63a-4629-9c49-cb75e9396cc8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.247693 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.247732 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.247745 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzg9f\" (UniqueName: \"kubernetes.io/projected/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-kube-api-access-xzg9f\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.247758 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.247772 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.531154 5031 generic.go:334] "Generic (PLEG): container finished" podID="b73ab584-3221-45b8-bc6b-d979c88e8454" containerID="8892212ccb2a90e581f8442c663c443cfc8a28dcb5877c5e9b5696e6aae795aa" exitCode=0 Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.531225 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9cqv6" event={"ID":"b73ab584-3221-45b8-bc6b-d979c88e8454","Type":"ContainerDied","Data":"8892212ccb2a90e581f8442c663c443cfc8a28dcb5877c5e9b5696e6aae795aa"} Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.538058 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" event={"ID":"9998f77a-c972-45a5-9239-46fb7a98c8d8","Type":"ContainerStarted","Data":"a6dedd0900ad48063682ff4f271a4cc13e45982cf411309eee858a934589d68d"} Jan 29 08:58:46 
crc kubenswrapper[5031]: I0129 08:58:46.540408 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.546485 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" event={"ID":"5a8fa5d4-f63a-4629-9c49-cb75e9396cc8","Type":"ContainerDied","Data":"3e3ea99d9aff6e74d4fbb79a7ac15bcb5dc790cc24aeec0b31f972436f854f03"} Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.546560 5031 scope.go:117] "RemoveContainer" containerID="f2576a2ae19c56e11c65d1b607ecfde812f441a937abecace43022acb460e4dc" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.546816 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-rv72b" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.586476 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" podStartSLOduration=4.586453073 podStartE2EDuration="4.586453073s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:46.584551542 +0000 UTC m=+1207.084139504" watchObservedRunningTime="2026-01-29 08:58:46.586453073 +0000 UTC m=+1207.086041025" Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.655321 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:46 crc kubenswrapper[5031]: I0129 08:58:46.676816 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-rv72b"] Jan 29 08:58:48 crc kubenswrapper[5031]: I0129 08:58:48.297083 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" path="/var/lib/kubelet/pods/5a8fa5d4-f63a-4629-9c49-cb75e9396cc8/volumes" Jan 29 08:58:49 crc kubenswrapper[5031]: I0129 08:58:49.592437 5031 generic.go:334] "Generic (PLEG): container finished" podID="9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" containerID="1e7b40902b272dbf69bd78c3a0692143594320eed9dd6b309b99d45b6068a6aa" exitCode=0 Jan 29 08:58:49 crc kubenswrapper[5031]: I0129 08:58:49.592707 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8xzb" event={"ID":"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d","Type":"ContainerDied","Data":"1e7b40902b272dbf69bd78c3a0692143594320eed9dd6b309b99d45b6068a6aa"} Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.122052 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9cqv6" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.145134 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data\") pod \"b73ab584-3221-45b8-bc6b-d979c88e8454\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.145260 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data\") pod \"b73ab584-3221-45b8-bc6b-d979c88e8454\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.145294 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle\") pod \"b73ab584-3221-45b8-bc6b-d979c88e8454\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.145447 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztnv8\" (UniqueName: \"kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8\") pod \"b73ab584-3221-45b8-bc6b-d979c88e8454\" (UID: \"b73ab584-3221-45b8-bc6b-d979c88e8454\") " Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.154967 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8" (OuterVolumeSpecName: "kube-api-access-ztnv8") pod "b73ab584-3221-45b8-bc6b-d979c88e8454" (UID: "b73ab584-3221-45b8-bc6b-d979c88e8454"). InnerVolumeSpecName "kube-api-access-ztnv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.170611 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b73ab584-3221-45b8-bc6b-d979c88e8454" (UID: "b73ab584-3221-45b8-bc6b-d979c88e8454"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.191714 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b73ab584-3221-45b8-bc6b-d979c88e8454" (UID: "b73ab584-3221-45b8-bc6b-d979c88e8454"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.206178 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data" (OuterVolumeSpecName: "config-data") pod "b73ab584-3221-45b8-bc6b-d979c88e8454" (UID: "b73ab584-3221-45b8-bc6b-d979c88e8454"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.247894 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztnv8\" (UniqueName: \"kubernetes.io/projected/b73ab584-3221-45b8-bc6b-d979c88e8454-kube-api-access-ztnv8\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.247931 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.247956 5031 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.247967 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ab584-3221-45b8-bc6b-d979c88e8454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.610917 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9cqv6" Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.611083 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9cqv6" event={"ID":"b73ab584-3221-45b8-bc6b-d979c88e8454","Type":"ContainerDied","Data":"8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9"} Jan 29 08:58:50 crc kubenswrapper[5031]: I0129 08:58:50.611416 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8326d41136193dee04587a73f540c78e45ad43aba1e36d1eb1eff1dcaa0147e9" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.625436 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.625703 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" containerID="cri-o://a6dedd0900ad48063682ff4f271a4cc13e45982cf411309eee858a934589d68d" gracePeriod=10 Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.640596 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.707469 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:58:51 crc kubenswrapper[5031]: E0129 08:58:51.707865 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" containerName="init" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.707883 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" containerName="init" Jan 29 08:58:51 crc kubenswrapper[5031]: E0129 08:58:51.707909 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" containerName="glance-db-sync" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.707916 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" containerName="glance-db-sync" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.708067 5031 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" containerName="glance-db-sync" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.708094 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a8fa5d4-f63a-4629-9c49-cb75e9396cc8" containerName="init" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.708927 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.752236 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.887475 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.887588 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.887620 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.887777 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.888067 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j9p\" (UniqueName: \"kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.990782 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.990948 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j9p\" (UniqueName: \"kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.991001 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.991048 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.991076 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.991788 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.992416 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.993029 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:51 crc kubenswrapper[5031]: I0129 08:58:51.995234 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:52 crc kubenswrapper[5031]: I0129 08:58:52.022087 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j9p\" (UniqueName: \"kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p\") pod \"dnsmasq-dns-5b6dbdb6f5-vnvdc\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:52 crc kubenswrapper[5031]: I0129 08:58:52.039001 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:58:52 crc kubenswrapper[5031]: I0129 08:58:52.657193 5031 generic.go:334] "Generic (PLEG): container finished" podID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerID="a6dedd0900ad48063682ff4f271a4cc13e45982cf411309eee858a934589d68d" exitCode=0 Jan 29 08:58:52 crc kubenswrapper[5031]: I0129 08:58:52.657269 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" event={"ID":"9998f77a-c972-45a5-9239-46fb7a98c8d8","Type":"ContainerDied","Data":"a6dedd0900ad48063682ff4f271a4cc13e45982cf411309eee858a934589d68d"} Jan 29 08:58:53 crc kubenswrapper[5031]: I0129 08:58:53.988866 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.588294 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.691817 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.691992 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.692024 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.692054 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.692117 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.692148 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr8sn\" (UniqueName: \"kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn\") pod \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\" (UID: \"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d\") " Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.698808 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: 
"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.700842 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn" (OuterVolumeSpecName: "kube-api-access-xr8sn") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "kube-api-access-xr8sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.703455 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts" (OuterVolumeSpecName: "scripts") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.706208 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.719660 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8xzb" event={"ID":"9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d","Type":"ContainerDied","Data":"6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750"} Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.719708 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2946fdbbdef7fa6d41916151848bd68d67c61b5973b0355db0e31129765750" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.719746 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q8xzb" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.740432 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.746897 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data" (OuterVolumeSpecName: "config-data") pod "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" (UID: "9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.795707 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.796081 5031 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.796115 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.796125 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.796133 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr8sn\" (UniqueName: \"kubernetes.io/projected/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-kube-api-access-xr8sn\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:56 crc kubenswrapper[5031]: I0129 08:58:56.796143 5031 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.761254 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q8xzb"] Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.768462 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q8xzb"] Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.853160 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8q6zp"] Jan 29 08:58:57 crc kubenswrapper[5031]: E0129 08:58:57.853575 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" containerName="keystone-bootstrap" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.853592 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" containerName="keystone-bootstrap" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.853761 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" containerName="keystone-bootstrap" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.854297 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.858197 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dbn2" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.859843 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.860028 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.860213 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.866883 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.868519 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8q6zp"] Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917704 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917777 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917824 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917840 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfddj\" (UniqueName: \"kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917870 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:57 crc kubenswrapper[5031]: I0129 08:58:57.917952 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019010 5031 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019094 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019116 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019153 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019171 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfddj\" (UniqueName: \"kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.019192 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.025419 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.036026 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.045318 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.046611 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfddj\" (UniqueName: \"kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj\") pod \"keystone-bootstrap-8q6zp\" (UID: 
\"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.050214 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.051176 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle\") pod \"keystone-bootstrap-8q6zp\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.173524 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:58:58 crc kubenswrapper[5031]: I0129 08:58:58.293880 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d" path="/var/lib/kubelet/pods/9bd467a1-d0d0-4ca7-9195-ab8bc418fc7d/volumes" Jan 29 08:59:03 crc kubenswrapper[5031]: I0129 08:59:03.989361 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 29 08:59:05 crc kubenswrapper[5031]: E0129 08:59:05.505737 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 08:59:05 crc kubenswrapper[5031]: E0129 08:59:05.506192 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg8zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kmdhl_openstack(66e96bab-0ee6-41af-9223-9f510ad5bbec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:05 crc kubenswrapper[5031]: E0129 08:59:05.509582 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kmdhl" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.597909 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.684635 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb\") pod \"9998f77a-c972-45a5-9239-46fb7a98c8d8\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.684716 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzlkw\" (UniqueName: \"kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw\") pod \"9998f77a-c972-45a5-9239-46fb7a98c8d8\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.684764 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc\") pod \"9998f77a-c972-45a5-9239-46fb7a98c8d8\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.684792 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb\") pod \"9998f77a-c972-45a5-9239-46fb7a98c8d8\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.684992 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config\") pod \"9998f77a-c972-45a5-9239-46fb7a98c8d8\" (UID: \"9998f77a-c972-45a5-9239-46fb7a98c8d8\") " Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.700354 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw" (OuterVolumeSpecName: "kube-api-access-lzlkw") pod "9998f77a-c972-45a5-9239-46fb7a98c8d8" (UID: "9998f77a-c972-45a5-9239-46fb7a98c8d8"). InnerVolumeSpecName "kube-api-access-lzlkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.732822 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9998f77a-c972-45a5-9239-46fb7a98c8d8" (UID: "9998f77a-c972-45a5-9239-46fb7a98c8d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.734058 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9998f77a-c972-45a5-9239-46fb7a98c8d8" (UID: "9998f77a-c972-45a5-9239-46fb7a98c8d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.736189 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9998f77a-c972-45a5-9239-46fb7a98c8d8" (UID: "9998f77a-c972-45a5-9239-46fb7a98c8d8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.743911 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config" (OuterVolumeSpecName: "config") pod "9998f77a-c972-45a5-9239-46fb7a98c8d8" (UID: "9998f77a-c972-45a5-9239-46fb7a98c8d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.801208 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzlkw\" (UniqueName: \"kubernetes.io/projected/9998f77a-c972-45a5-9239-46fb7a98c8d8-kube-api-access-lzlkw\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.801252 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.801387 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.801401 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.801410 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998f77a-c972-45a5-9239-46fb7a98c8d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.818561 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.819083 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" event={"ID":"9998f77a-c972-45a5-9239-46fb7a98c8d8","Type":"ContainerDied","Data":"1a1872696c09998c8a3a3c4a3086260691f450f528feac40821eb78b6e2220e0"} Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.819150 5031 scope.go:117] "RemoveContainer" containerID="a6dedd0900ad48063682ff4f271a4cc13e45982cf411309eee858a934589d68d" Jan 29 08:59:05 crc kubenswrapper[5031]: E0129 08:59:05.819839 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-kmdhl" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.865065 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:59:05 crc kubenswrapper[5031]: I0129 08:59:05.873816 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-2lrpv"] Jan 29 08:59:06 crc kubenswrapper[5031]: I0129 08:59:06.293032 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" path="/var/lib/kubelet/pods/9998f77a-c972-45a5-9239-46fb7a98c8d8/volumes" Jan 29 08:59:06 crc kubenswrapper[5031]: I0129 08:59:06.907602 5031 scope.go:117] "RemoveContainer" containerID="7bffc0030bf8640c917cc0d1849ac6359b2c723f253d7898a9166cb5b869ffb1" Jan 29 08:59:06 crc kubenswrapper[5031]: E0129 08:59:06.913988 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 08:59:06 crc kubenswrapper[5031]: E0129 08:59:06.914156 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm4fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xg72z_openstack(997a6082-d87d-4954-b383-9b27e161be4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:06 crc kubenswrapper[5031]: E0129 08:59:06.915489 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xg72z" podUID="997a6082-d87d-4954-b383-9b27e161be4e" Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.354671 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.436894 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8q6zp"] Jan 29 08:59:07 crc kubenswrapper[5031]: W0129 08:59:07.448454 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328b04fe_0ab5_45ab_8c94_239a7221575a.slice/crio-a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47 WatchSource:0}: Error finding container a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47: Status 404 returned error can't find the container with id 
a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47 Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.843429 5031 generic.go:334] "Generic (PLEG): container finished" podID="45e0cab1-c52b-4641-a557-76529aa23670" containerID="3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2" exitCode=0 Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.843743 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" event={"ID":"45e0cab1-c52b-4641-a557-76529aa23670","Type":"ContainerDied","Data":"3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2"} Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.843777 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" event={"ID":"45e0cab1-c52b-4641-a557-76529aa23670","Type":"ContainerStarted","Data":"c3156a2ed449db3fd7ab4c05c0ae0d149d4ba42660481fb36a0d681a965b04e8"} Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.859903 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8q6zp" event={"ID":"328b04fe-0ab5-45ab-8c94-239a7221575a","Type":"ContainerStarted","Data":"56732bde36bf049c1a0eab3361754098e731940fcc5a4fb8dcdf6eb536847818"} Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.859976 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8q6zp" event={"ID":"328b04fe-0ab5-45ab-8c94-239a7221575a","Type":"ContainerStarted","Data":"a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47"} Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.862060 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerStarted","Data":"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94"} Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.865528 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hrprs" event={"ID":"64d57cae-e6a0-4e2d-8509-a19fa68fcf25","Type":"ContainerStarted","Data":"3b44d7401a20ee2f9ed558fc808863f113f097d62a1e4060d09a1879a34e9272"} Jan 29 08:59:07 crc kubenswrapper[5031]: E0129 08:59:07.868890 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-xg72z" podUID="997a6082-d87d-4954-b383-9b27e161be4e" Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.930651 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8q6zp" podStartSLOduration=10.930631088 podStartE2EDuration="10.930631088s" podCreationTimestamp="2026-01-29 08:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:07.921049421 +0000 UTC m=+1228.420637383" watchObservedRunningTime="2026-01-29 08:59:07.930631088 +0000 UTC m=+1228.430219040" Jan 29 08:59:07 crc kubenswrapper[5031]: I0129 08:59:07.974887 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hrprs" podStartSLOduration=4.023625033 podStartE2EDuration="25.974868207s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="2026-01-29 08:58:44.933706824 +0000 UTC m=+1205.433294776" lastFinishedPulling="2026-01-29 
08:59:06.884949998 +0000 UTC m=+1227.384537950" observedRunningTime="2026-01-29 08:59:07.938100639 +0000 UTC m=+1228.437688581" watchObservedRunningTime="2026-01-29 08:59:07.974868207 +0000 UTC m=+1228.474456179" Jan 29 08:59:08 crc kubenswrapper[5031]: I0129 08:59:08.874617 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" event={"ID":"45e0cab1-c52b-4641-a557-76529aa23670","Type":"ContainerStarted","Data":"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd"} Jan 29 08:59:08 crc kubenswrapper[5031]: I0129 08:59:08.875019 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:59:08 crc kubenswrapper[5031]: I0129 08:59:08.877757 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerStarted","Data":"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae"} Jan 29 08:59:08 crc kubenswrapper[5031]: I0129 08:59:08.900775 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" podStartSLOduration=17.900754469 podStartE2EDuration="17.900754469s" podCreationTimestamp="2026-01-29 08:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:08.895836137 +0000 UTC m=+1229.395424119" watchObservedRunningTime="2026-01-29 08:59:08.900754469 +0000 UTC m=+1229.400342421" Jan 29 08:59:08 crc kubenswrapper[5031]: I0129 08:59:08.991403 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bf59f66bf-2lrpv" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 29 08:59:09 crc kubenswrapper[5031]: I0129 08:59:09.945815 5031 generic.go:334] "Generic (PLEG): container finished" podID="64d57cae-e6a0-4e2d-8509-a19fa68fcf25" containerID="3b44d7401a20ee2f9ed558fc808863f113f097d62a1e4060d09a1879a34e9272" exitCode=0 Jan 29 08:59:09 crc kubenswrapper[5031]: I0129 08:59:09.946834 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hrprs" event={"ID":"64d57cae-e6a0-4e2d-8509-a19fa68fcf25","Type":"ContainerDied","Data":"3b44d7401a20ee2f9ed558fc808863f113f097d62a1e4060d09a1879a34e9272"} Jan 29 08:59:11 crc kubenswrapper[5031]: I0129 08:59:11.968274 5031 generic.go:334] "Generic (PLEG): container finished" podID="328b04fe-0ab5-45ab-8c94-239a7221575a" containerID="56732bde36bf049c1a0eab3361754098e731940fcc5a4fb8dcdf6eb536847818" exitCode=0 Jan 29 08:59:11 crc kubenswrapper[5031]: I0129 08:59:11.968433 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8q6zp" event={"ID":"328b04fe-0ab5-45ab-8c94-239a7221575a","Type":"ContainerDied","Data":"56732bde36bf049c1a0eab3361754098e731940fcc5a4fb8dcdf6eb536847818"} Jan 29 08:59:11 crc kubenswrapper[5031]: I0129 08:59:11.972054 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hrprs" event={"ID":"64d57cae-e6a0-4e2d-8509-a19fa68fcf25","Type":"ContainerDied","Data":"b866d85a41917ecec2c5d5c470f056123052ad2302bdd4964702ba469ed8aa66"} Jan 29 08:59:11 crc kubenswrapper[5031]: I0129 08:59:11.972094 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b866d85a41917ecec2c5d5c470f056123052ad2302bdd4964702ba469ed8aa66" Jan 29 08:59:12 
crc kubenswrapper[5031]: I0129 08:59:12.037023 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hrprs" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.138121 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts\") pod \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.138274 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw625\" (UniqueName: \"kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625\") pod \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.138407 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle\") pod \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.138450 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs\") pod \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.138477 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data\") pod \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\" (UID: \"64d57cae-e6a0-4e2d-8509-a19fa68fcf25\") " Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.139319 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs" (OuterVolumeSpecName: "logs") pod "64d57cae-e6a0-4e2d-8509-a19fa68fcf25" (UID: "64d57cae-e6a0-4e2d-8509-a19fa68fcf25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.144414 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts" (OuterVolumeSpecName: "scripts") pod "64d57cae-e6a0-4e2d-8509-a19fa68fcf25" (UID: "64d57cae-e6a0-4e2d-8509-a19fa68fcf25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.163219 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data" (OuterVolumeSpecName: "config-data") pod "64d57cae-e6a0-4e2d-8509-a19fa68fcf25" (UID: "64d57cae-e6a0-4e2d-8509-a19fa68fcf25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.164137 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625" (OuterVolumeSpecName: "kube-api-access-sw625") pod "64d57cae-e6a0-4e2d-8509-a19fa68fcf25" (UID: "64d57cae-e6a0-4e2d-8509-a19fa68fcf25"). 
InnerVolumeSpecName "kube-api-access-sw625". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.167925 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64d57cae-e6a0-4e2d-8509-a19fa68fcf25" (UID: "64d57cae-e6a0-4e2d-8509-a19fa68fcf25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.240793 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.240821 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw625\" (UniqueName: \"kubernetes.io/projected/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-kube-api-access-sw625\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.240833 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.240842 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.240851 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d57cae-e6a0-4e2d-8509-a19fa68fcf25-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.981481 5031 generic.go:334] "Generic (PLEG): container finished" podID="8ea99f3b-a67a-4077-aba2-d6a5910779f3" containerID="30e9f3e3171c34b71e8c911f29972049f0e8bddfa4d21a0bc56e048277caa0a7" exitCode=0 Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.982007 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tkg9p" event={"ID":"8ea99f3b-a67a-4077-aba2-d6a5910779f3","Type":"ContainerDied","Data":"30e9f3e3171c34b71e8c911f29972049f0e8bddfa4d21a0bc56e048277caa0a7"} Jan 29 08:59:12 crc kubenswrapper[5031]: I0129 08:59:12.982293 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hrprs" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.191970 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-97c68858b-9q587"] Jan 29 08:59:13 crc kubenswrapper[5031]: E0129 08:59:13.192329 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="init" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.192394 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="init" Jan 29 08:59:13 crc kubenswrapper[5031]: E0129 08:59:13.192431 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.192439 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" Jan 29 08:59:13 crc kubenswrapper[5031]: E0129 08:59:13.192448 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d57cae-e6a0-4e2d-8509-a19fa68fcf25" containerName="placement-db-sync" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.192456 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d57cae-e6a0-4e2d-8509-a19fa68fcf25" containerName="placement-db-sync" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.192627 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d57cae-e6a0-4e2d-8509-a19fa68fcf25" containerName="placement-db-sync" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.192641 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9998f77a-c972-45a5-9239-46fb7a98c8d8" containerName="dnsmasq-dns" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.193933 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.197254 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.197307 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.197623 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.197668 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.197985 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5868q" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.212236 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-97c68858b-9q587"] Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257134 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfj2w\" (UniqueName: \"kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257215 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257523 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257606 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257717 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257799 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.257824 5031 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359165 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359233 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359258 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359300 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfj2w\" (UniqueName: \"kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359344 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.359881 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.360499 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.360557 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.364035 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.367663 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.367921 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.368079 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.369813 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.376645 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfj2w\" (UniqueName: \"kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w\") pod \"placement-97c68858b-9q587\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.531348 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.711835 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767553 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfddj\" (UniqueName: \"kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767602 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767621 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767709 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767834 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.767854 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys\") pod \"328b04fe-0ab5-45ab-8c94-239a7221575a\" (UID: \"328b04fe-0ab5-45ab-8c94-239a7221575a\") " Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.775151 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.775187 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts" (OuterVolumeSpecName: "scripts") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.775261 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj" (OuterVolumeSpecName: "kube-api-access-mfddj") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "kube-api-access-mfddj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.775271 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.799621 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data" (OuterVolumeSpecName: "config-data") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.809489 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "328b04fe-0ab5-45ab-8c94-239a7221575a" (UID: "328b04fe-0ab5-45ab-8c94-239a7221575a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.869812 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfddj\" (UniqueName: \"kubernetes.io/projected/328b04fe-0ab5-45ab-8c94-239a7221575a-kube-api-access-mfddj\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.870125 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.870210 5031 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.870275 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.870340 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.870480 5031 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/328b04fe-0ab5-45ab-8c94-239a7221575a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.991216 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8q6zp" event={"ID":"328b04fe-0ab5-45ab-8c94-239a7221575a","Type":"ContainerDied","Data":"a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47"} Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.992291 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23413321bee9f88075b710a13071f158b4e091eafb9ae31082c40187dbb2c47" Jan 29 08:59:13 crc kubenswrapper[5031]: I0129 08:59:13.991492 5031 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8q6zp" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.013252 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerStarted","Data":"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d"} Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.028473 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-97c68858b-9q587"] Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.113051 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b6fcb467b-dc5s8"] Jan 29 08:59:14 crc kubenswrapper[5031]: E0129 08:59:14.114136 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328b04fe-0ab5-45ab-8c94-239a7221575a" containerName="keystone-bootstrap" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.114162 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="328b04fe-0ab5-45ab-8c94-239a7221575a" containerName="keystone-bootstrap" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.114614 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="328b04fe-0ab5-45ab-8c94-239a7221575a" containerName="keystone-bootstrap" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.115204 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.118408 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.118746 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.120237 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.121554 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.121681 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dbn2" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.121878 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.134243 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b6fcb467b-dc5s8"] Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.175855 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-combined-ca-bundle\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.175904 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-internal-tls-certs\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.175962 5031 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-fernet-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.176085 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-config-data\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.176147 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-credential-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.176173 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-scripts\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.176202 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-public-tls-certs\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.176395 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dsj\" (UniqueName: \"kubernetes.io/projected/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-kube-api-access-n2dsj\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277759 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-credential-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277790 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-scripts\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277822 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-public-tls-certs\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277892 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n2dsj\" (UniqueName: \"kubernetes.io/projected/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-kube-api-access-n2dsj\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277941 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-combined-ca-bundle\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277966 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-internal-tls-certs\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.277998 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-fernet-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.278040 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-config-data\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.281929 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-config-data\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.282420 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-scripts\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.285410 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-fernet-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.285557 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-credential-keys\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.285762 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-public-tls-certs\") pod 
\"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.286055 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-internal-tls-certs\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.286170 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-combined-ca-bundle\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.296513 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dsj\" (UniqueName: \"kubernetes.io/projected/11cb22e9-f3f2-4a42-804c-aaa47ca31a16-kube-api-access-n2dsj\") pod \"keystone-6b6fcb467b-dc5s8\" (UID: \"11cb22e9-f3f2-4a42-804c-aaa47ca31a16\") " pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.432872 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.530876 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.582161 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config\") pod \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.582210 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm82t\" (UniqueName: \"kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t\") pod \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.582304 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle\") pod \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\" (UID: \"8ea99f3b-a67a-4077-aba2-d6a5910779f3\") " Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.592530 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t" (OuterVolumeSpecName: "kube-api-access-fm82t") pod "8ea99f3b-a67a-4077-aba2-d6a5910779f3" (UID: "8ea99f3b-a67a-4077-aba2-d6a5910779f3"). InnerVolumeSpecName "kube-api-access-fm82t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.616447 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea99f3b-a67a-4077-aba2-d6a5910779f3" (UID: "8ea99f3b-a67a-4077-aba2-d6a5910779f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:14 crc kubenswrapper[5031]: I0129 08:59:14.620597 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config" (OuterVolumeSpecName: "config") pod "8ea99f3b-a67a-4077-aba2-d6a5910779f3" (UID: "8ea99f3b-a67a-4077-aba2-d6a5910779f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:14.684536 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:14.684572 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm82t\" (UniqueName: \"kubernetes.io/projected/8ea99f3b-a67a-4077-aba2-d6a5910779f3-kube-api-access-fm82t\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:14.684585 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea99f3b-a67a-4077-aba2-d6a5910779f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:14.873138 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b6fcb467b-dc5s8"] Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.023522 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tkg9p" event={"ID":"8ea99f3b-a67a-4077-aba2-d6a5910779f3","Type":"ContainerDied","Data":"c911929989a2570cd827772341340e02dabbcafdc3bde2eda1667123f23c74bd"} Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.023556 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c911929989a2570cd827772341340e02dabbcafdc3bde2eda1667123f23c74bd" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.023613 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tkg9p" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026079 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerStarted","Data":"14d76079584a5062e530f31b119d8ff265ab554fc478242705e5abba2fec2a30"} Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026099 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerStarted","Data":"9887758ff5d01c7a37bbae159f96c09381d7fef5fa405cabf927f23ebeb86ccb"} Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026110 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerStarted","Data":"f9be6528fa23d8a4c7af6e0a46b35a2d896c0d67e9250f3230e288f29753ccb1"} Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026205 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026217 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.026889 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b6fcb467b-dc5s8" event={"ID":"11cb22e9-f3f2-4a42-804c-aaa47ca31a16","Type":"ContainerStarted","Data":"e2b45b921671b69e91f8015f141c20b18e9d7db744e3e80ab728b29e5dace453"} Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.084532 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-97c68858b-9q587" podStartSLOduration=2.084502683 podStartE2EDuration="2.084502683s" podCreationTimestamp="2026-01-29 08:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:15.049320148 +0000 UTC m=+1235.548908120" watchObservedRunningTime="2026-01-29 08:59:15.084502683 +0000 UTC m=+1235.584090655" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.319242 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.319466 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="dnsmasq-dns" containerID="cri-o://74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd" gracePeriod=10 Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.326020 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.353403 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:15 crc kubenswrapper[5031]: E0129 08:59:15.353752 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea99f3b-a67a-4077-aba2-d6a5910779f3" containerName="neutron-db-sync" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.353806 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea99f3b-a67a-4077-aba2-d6a5910779f3" containerName="neutron-db-sync" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.353963 
5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea99f3b-a67a-4077-aba2-d6a5910779f3" containerName="neutron-db-sync" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.354843 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.389119 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.405958 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.406051 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.406221 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.407074 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.407221 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4h88\" (UniqueName: \"kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.505520 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.514885 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.518965 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4h88\" (UniqueName: \"kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519055 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519080 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519111 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519152 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519216 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519425 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519660 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.519672 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r8c77" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.520282 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.520475 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.521824 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.523177 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.545006 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4h88\" (UniqueName: \"kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88\") pod \"dnsmasq-dns-5f66db59b9-4zmh2\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.549480 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.628550 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.628963 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.629034 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.629150 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5rq\" (UniqueName: \"kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.629204 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.730810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc 
kubenswrapper[5031]: I0129 08:59:15.730964 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.731016 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5rq\" (UniqueName: \"kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.731038 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.731146 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.743439 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.745019 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.745504 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.750145 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.754510 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5rq\" (UniqueName: \"kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq\") pod \"neutron-55cd8fc46d-6fxwk\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.793891 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.884269 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.919852 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.940945 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7j9p\" (UniqueName: \"kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p\") pod \"45e0cab1-c52b-4641-a557-76529aa23670\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.941024 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config\") pod \"45e0cab1-c52b-4641-a557-76529aa23670\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.941120 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb\") pod \"45e0cab1-c52b-4641-a557-76529aa23670\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.941217 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb\") pod \"45e0cab1-c52b-4641-a557-76529aa23670\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.941256 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc\") pod \"45e0cab1-c52b-4641-a557-76529aa23670\" (UID: \"45e0cab1-c52b-4641-a557-76529aa23670\") " Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.946093 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p" (OuterVolumeSpecName: "kube-api-access-l7j9p") pod "45e0cab1-c52b-4641-a557-76529aa23670" (UID: "45e0cab1-c52b-4641-a557-76529aa23670"). InnerVolumeSpecName "kube-api-access-l7j9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:15 crc kubenswrapper[5031]: I0129 08:59:15.991261 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45e0cab1-c52b-4641-a557-76529aa23670" (UID: "45e0cab1-c52b-4641-a557-76529aa23670"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.008094 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45e0cab1-c52b-4641-a557-76529aa23670" (UID: "45e0cab1-c52b-4641-a557-76529aa23670"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.019806 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config" (OuterVolumeSpecName: "config") pod "45e0cab1-c52b-4641-a557-76529aa23670" (UID: "45e0cab1-c52b-4641-a557-76529aa23670"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.036900 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45e0cab1-c52b-4641-a557-76529aa23670" (UID: "45e0cab1-c52b-4641-a557-76529aa23670"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.044299 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.044337 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.050459 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7j9p\" (UniqueName: \"kubernetes.io/projected/45e0cab1-c52b-4641-a557-76529aa23670-kube-api-access-l7j9p\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.050510 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.050521 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e0cab1-c52b-4641-a557-76529aa23670-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.050670 5031 generic.go:334] "Generic (PLEG): container finished" podID="45e0cab1-c52b-4641-a557-76529aa23670" containerID="74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd" exitCode=0 Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.050829 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.058160 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" event={"ID":"45e0cab1-c52b-4641-a557-76529aa23670","Type":"ContainerDied","Data":"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd"} Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.058226 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc" event={"ID":"45e0cab1-c52b-4641-a557-76529aa23670","Type":"ContainerDied","Data":"c3156a2ed449db3fd7ab4c05c0ae0d149d4ba42660481fb36a0d681a965b04e8"} Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.058252 5031 scope.go:117] "RemoveContainer" containerID="74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.087698 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b6fcb467b-dc5s8" event={"ID":"11cb22e9-f3f2-4a42-804c-aaa47ca31a16","Type":"ContainerStarted","Data":"3adacaf293ddefd0e45208cda9b85b80f17ef5a07a9dbdb8208190768def4a81"} Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.087848 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.118155 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.136534 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-vnvdc"] Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.141498 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6b6fcb467b-dc5s8" podStartSLOduration=2.141480368 podStartE2EDuration="2.141480368s" podCreationTimestamp="2026-01-29 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:16.128555981 +0000 UTC m=+1236.628143933" watchObservedRunningTime="2026-01-29 08:59:16.141480368 +0000 UTC m=+1236.641068320" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.141836 5031 scope.go:117] "RemoveContainer" containerID="3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.177468 5031 scope.go:117] "RemoveContainer" containerID="74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd" Jan 29 08:59:16 crc kubenswrapper[5031]: E0129 08:59:16.188211 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd\": container with ID starting with 74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd not found: ID does not exist" containerID="74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.188272 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd"} err="failed to get container status \"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd\": rpc error: code = NotFound desc = could not find container \"74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd\": container 
with ID starting with 74515e2d4087524997c4a5a2bf0164f7c416880f0eefcaf4f2a181c4976516dd not found: ID does not exist" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.188339 5031 scope.go:117] "RemoveContainer" containerID="3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2" Jan 29 08:59:16 crc kubenswrapper[5031]: E0129 08:59:16.189083 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2\": container with ID starting with 3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2 not found: ID does not exist" containerID="3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.189142 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2"} err="failed to get container status \"3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2\": rpc error: code = NotFound desc = could not find container \"3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2\": container with ID starting with 3750b7140e47c6f4996815cbc49fb537b5de10946e47a750b491f2ff84fc98d2 not found: ID does not exist" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.301626 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e0cab1-c52b-4641-a557-76529aa23670" path="/var/lib/kubelet/pods/45e0cab1-c52b-4641-a557-76529aa23670/volumes" Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.412253 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:16 crc kubenswrapper[5031]: W0129 08:59:16.416237 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea6129e9_5206_488e_85f5_2ffccb4dd28b.slice/crio-9a01519f33afd5753c749dd84a5b463efde44d3f2a4051d3564ae77d7556f566 WatchSource:0}: Error finding container 9a01519f33afd5753c749dd84a5b463efde44d3f2a4051d3564ae77d7556f566: Status 404 returned error can't find the container with id 9a01519f33afd5753c749dd84a5b463efde44d3f2a4051d3564ae77d7556f566 Jan 29 08:59:16 crc kubenswrapper[5031]: I0129 08:59:16.660463 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 08:59:16 crc kubenswrapper[5031]: W0129 08:59:16.671303 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf647f09_336d_4f0a_9cf7_415ecf4a9d26.slice/crio-81c185a93d47ef8b90c1987d97c1f297994fabd3c27037841313ca68d58b017d WatchSource:0}: Error finding container 81c185a93d47ef8b90c1987d97c1f297994fabd3c27037841313ca68d58b017d: Status 404 returned error can't find the container with id 81c185a93d47ef8b90c1987d97c1f297994fabd3c27037841313ca68d58b017d Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.123109 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerStarted","Data":"54c71041e7c4927e77d0c3367148761d20f97f56f5bae9f1561ec53a539fb273"} Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.123161 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" 
event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerStarted","Data":"6c08c56a28d1cd5d115e430600a8f8a7cd7ef18bcd823b24c8c04ad9c67e6636"} Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.123173 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerStarted","Data":"81c185a93d47ef8b90c1987d97c1f297994fabd3c27037841313ca68d58b017d"} Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.124290 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.146202 5031 generic.go:334] "Generic (PLEG): container finished" podID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerID="05263ab28ed137fbff86ffd33d166008d453c505e7f1ec75554c7b0b7cba2354" exitCode=0 Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.148742 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" event={"ID":"ea6129e9-5206-488e-85f5-2ffccb4dd28b","Type":"ContainerDied","Data":"05263ab28ed137fbff86ffd33d166008d453c505e7f1ec75554c7b0b7cba2354"} Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.148871 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" event={"ID":"ea6129e9-5206-488e-85f5-2ffccb4dd28b","Type":"ContainerStarted","Data":"9a01519f33afd5753c749dd84a5b463efde44d3f2a4051d3564ae77d7556f566"} Jan 29 08:59:17 crc kubenswrapper[5031]: I0129 08:59:17.151306 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55cd8fc46d-6fxwk" podStartSLOduration=2.151285468 podStartE2EDuration="2.151285468s" podCreationTimestamp="2026-01-29 08:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:17.143695284 +0000 UTC m=+1237.643283236" watchObservedRunningTime="2026-01-29 08:59:17.151285468 +0000 UTC m=+1237.650873440" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.086394 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-558dccb5cc-bkkrn"] Jan 29 08:59:18 crc kubenswrapper[5031]: E0129 08:59:18.087202 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="dnsmasq-dns" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.087222 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="dnsmasq-dns" Jan 29 08:59:18 crc kubenswrapper[5031]: E0129 08:59:18.087249 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="init" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.087259 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="init" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.087507 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e0cab1-c52b-4641-a557-76529aa23670" containerName="dnsmasq-dns" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.088769 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.091376 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.091571 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.107958 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-558dccb5cc-bkkrn"] Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.171506 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" event={"ID":"ea6129e9-5206-488e-85f5-2ffccb4dd28b","Type":"ContainerStarted","Data":"4c401296a56dd6d6cf6c2c94367ce8759f2fd6edb77e3ee72ea715409a1f89c8"} Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.171836 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.206046 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" podStartSLOduration=3.206028534 podStartE2EDuration="3.206028534s" podCreationTimestamp="2026-01-29 08:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:18.197752791 +0000 UTC m=+1238.697340753" watchObservedRunningTime="2026-01-29 08:59:18.206028534 +0000 UTC m=+1238.705616486" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209123 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209188 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-internal-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209254 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-public-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209322 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-httpd-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209359 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-combined-ca-bundle\") pod \"neutron-558dccb5cc-bkkrn\" (UID: 
\"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209432 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-ovndb-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.209455 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8444\" (UniqueName: \"kubernetes.io/projected/8b30d63e-6219-4832-868b-9a115b30f433-kube-api-access-w8444\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314422 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-ovndb-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314481 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8444\" (UniqueName: \"kubernetes.io/projected/8b30d63e-6219-4832-868b-9a115b30f433-kube-api-access-w8444\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314580 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314641 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-internal-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314725 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-public-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314812 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-httpd-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.314862 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-combined-ca-bundle\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " 
pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.334255 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-ovndb-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.334355 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-combined-ca-bundle\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.335904 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-httpd-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.336321 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-internal-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.347423 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-public-tls-certs\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.360824 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b30d63e-6219-4832-868b-9a115b30f433-config\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.364191 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8444\" (UniqueName: \"kubernetes.io/projected/8b30d63e-6219-4832-868b-9a115b30f433-kube-api-access-w8444\") pod \"neutron-558dccb5cc-bkkrn\" (UID: \"8b30d63e-6219-4832-868b-9a115b30f433\") " pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.416514 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:18 crc kubenswrapper[5031]: I0129 08:59:18.996335 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-558dccb5cc-bkkrn"] Jan 29 08:59:19 crc kubenswrapper[5031]: I0129 08:59:19.187344 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmdhl" event={"ID":"66e96bab-0ee6-41af-9223-9f510ad5bbec","Type":"ContainerStarted","Data":"73284c02465262e1058676773bdcb3d0c26034d3fb1e649a2ac74546b11c46ed"} Jan 29 08:59:19 crc kubenswrapper[5031]: I0129 08:59:19.191982 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558dccb5cc-bkkrn" event={"ID":"8b30d63e-6219-4832-868b-9a115b30f433","Type":"ContainerStarted","Data":"1766eb2bf51b1738be7b967a83300c2753e5ea64e9c4391b953feaca83bd97a1"} Jan 29 08:59:19 crc kubenswrapper[5031]: I0129 08:59:19.212144 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kmdhl" podStartSLOduration=3.295753968 podStartE2EDuration="37.212128434s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="2026-01-29 08:58:44.820546605 +0000 UTC m=+1205.320134557" lastFinishedPulling="2026-01-29 08:59:18.736921071 +0000 UTC m=+1239.236509023" observedRunningTime="2026-01-29 08:59:19.209420602 +0000 UTC m=+1239.709008544" watchObservedRunningTime="2026-01-29 08:59:19.212128434 +0000 UTC m=+1239.711716386" Jan 29 08:59:20 crc kubenswrapper[5031]: I0129 08:59:20.202251 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558dccb5cc-bkkrn" event={"ID":"8b30d63e-6219-4832-868b-9a115b30f433","Type":"ContainerStarted","Data":"65e908a65032f005bad4f660b172015ec44147393f797a1c794b1cc9d2c47d35"} Jan 29 08:59:20 crc kubenswrapper[5031]: I0129 08:59:20.202510 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:20 crc kubenswrapper[5031]: I0129 08:59:20.202524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558dccb5cc-bkkrn" event={"ID":"8b30d63e-6219-4832-868b-9a115b30f433","Type":"ContainerStarted","Data":"b1f2a794e91d857df1131c0d46426435099f5dab21f06099fcf4d57eb89a64de"} Jan 29 08:59:20 crc kubenswrapper[5031]: I0129 08:59:20.233753 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-558dccb5cc-bkkrn" podStartSLOduration=2.23373129 podStartE2EDuration="2.23373129s" podCreationTimestamp="2026-01-29 08:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:20.221086661 +0000 UTC m=+1240.720674613" watchObservedRunningTime="2026-01-29 08:59:20.23373129 +0000 UTC m=+1240.733319262" Jan 29 08:59:25 crc kubenswrapper[5031]: I0129 08:59:25.796329 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:25 crc kubenswrapper[5031]: I0129 08:59:25.875870 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:59:25 crc kubenswrapper[5031]: I0129 08:59:25.876155 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-hhbcg" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="dnsmasq-dns" containerID="cri-o://385c3349f858fd4c97dceb024fced3287400977035b20f1585f15a44f8dc3b5a" gracePeriod=10 Jan 29 08:59:26 crc 
kubenswrapper[5031]: I0129 08:59:26.258555 5031 generic.go:334] "Generic (PLEG): container finished" podID="66e96bab-0ee6-41af-9223-9f510ad5bbec" containerID="73284c02465262e1058676773bdcb3d0c26034d3fb1e649a2ac74546b11c46ed" exitCode=0 Jan 29 08:59:26 crc kubenswrapper[5031]: I0129 08:59:26.258642 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmdhl" event={"ID":"66e96bab-0ee6-41af-9223-9f510ad5bbec","Type":"ContainerDied","Data":"73284c02465262e1058676773bdcb3d0c26034d3fb1e649a2ac74546b11c46ed"} Jan 29 08:59:26 crc kubenswrapper[5031]: I0129 08:59:26.262022 5031 generic.go:334] "Generic (PLEG): container finished" podID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerID="385c3349f858fd4c97dceb024fced3287400977035b20f1585f15a44f8dc3b5a" exitCode=0 Jan 29 08:59:26 crc kubenswrapper[5031]: I0129 08:59:26.262077 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hhbcg" event={"ID":"f2cea483-4915-4fd9-8b38-e257ec143e34","Type":"ContainerDied","Data":"385c3349f858fd4c97dceb024fced3287400977035b20f1585f15a44f8dc3b5a"} Jan 29 08:59:26 crc kubenswrapper[5031]: I0129 08:59:26.431061 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-hhbcg" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.833384 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.838797 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.923998 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb\") pod \"f2cea483-4915-4fd9-8b38-e257ec143e34\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.924112 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config\") pod \"f2cea483-4915-4fd9-8b38-e257ec143e34\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.924160 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb\") pod \"f2cea483-4915-4fd9-8b38-e257ec143e34\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.924216 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8zs\" (UniqueName: \"kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs\") pod \"66e96bab-0ee6-41af-9223-9f510ad5bbec\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.925469 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px9bj\" (UniqueName: \"kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj\") pod \"f2cea483-4915-4fd9-8b38-e257ec143e34\" (UID: 
\"f2cea483-4915-4fd9-8b38-e257ec143e34\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.925522 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc\") pod \"f2cea483-4915-4fd9-8b38-e257ec143e34\" (UID: \"f2cea483-4915-4fd9-8b38-e257ec143e34\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.925633 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle\") pod \"66e96bab-0ee6-41af-9223-9f510ad5bbec\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.925688 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data\") pod \"66e96bab-0ee6-41af-9223-9f510ad5bbec\" (UID: \"66e96bab-0ee6-41af-9223-9f510ad5bbec\") " Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.934576 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs" (OuterVolumeSpecName: "kube-api-access-zg8zs") pod "66e96bab-0ee6-41af-9223-9f510ad5bbec" (UID: "66e96bab-0ee6-41af-9223-9f510ad5bbec"). InnerVolumeSpecName "kube-api-access-zg8zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.935041 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj" (OuterVolumeSpecName: "kube-api-access-px9bj") pod "f2cea483-4915-4fd9-8b38-e257ec143e34" (UID: "f2cea483-4915-4fd9-8b38-e257ec143e34"). InnerVolumeSpecName "kube-api-access-px9bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.944075 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "66e96bab-0ee6-41af-9223-9f510ad5bbec" (UID: "66e96bab-0ee6-41af-9223-9f510ad5bbec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.975874 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66e96bab-0ee6-41af-9223-9f510ad5bbec" (UID: "66e96bab-0ee6-41af-9223-9f510ad5bbec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:27 crc kubenswrapper[5031]: I0129 08:59:27.996338 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2cea483-4915-4fd9-8b38-e257ec143e34" (UID: "f2cea483-4915-4fd9-8b38-e257ec143e34"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.005394 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2cea483-4915-4fd9-8b38-e257ec143e34" (UID: "f2cea483-4915-4fd9-8b38-e257ec143e34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.024682 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config" (OuterVolumeSpecName: "config") pod "f2cea483-4915-4fd9-8b38-e257ec143e34" (UID: "f2cea483-4915-4fd9-8b38-e257ec143e34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.025353 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2cea483-4915-4fd9-8b38-e257ec143e34" (UID: "f2cea483-4915-4fd9-8b38-e257ec143e34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.027901 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.027995 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028071 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg8zs\" (UniqueName: \"kubernetes.io/projected/66e96bab-0ee6-41af-9223-9f510ad5bbec-kube-api-access-zg8zs\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028171 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px9bj\" (UniqueName: \"kubernetes.io/projected/f2cea483-4915-4fd9-8b38-e257ec143e34-kube-api-access-px9bj\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028283 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028357 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028446 5031 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66e96bab-0ee6-41af-9223-9f510ad5bbec-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.028528 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2cea483-4915-4fd9-8b38-e257ec143e34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.312881 
5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmdhl" event={"ID":"66e96bab-0ee6-41af-9223-9f510ad5bbec","Type":"ContainerDied","Data":"4e79a2b1c6a72622902af9526a935186704bba25280c6d9f8c69313ec163d5ca"} Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.312921 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e79a2b1c6a72622902af9526a935186704bba25280c6d9f8c69313ec163d5ca" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.313028 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmdhl" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.343753 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hhbcg" event={"ID":"f2cea483-4915-4fd9-8b38-e257ec143e34","Type":"ContainerDied","Data":"2eb87de28370612361ff37b3e2d1375639e3b3d29be6258909e5b111e57ad558"} Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.343808 5031 scope.go:117] "RemoveContainer" containerID="385c3349f858fd4c97dceb024fced3287400977035b20f1585f15a44f8dc3b5a" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.344126 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hhbcg" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.393921 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerStarted","Data":"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430"} Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.394164 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-central-agent" containerID="cri-o://413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94" gracePeriod=30 Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.394508 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.394563 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="proxy-httpd" containerID="cri-o://7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430" gracePeriod=30 Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.394622 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="sg-core" containerID="cri-o://73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d" gracePeriod=30 Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.394705 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-notification-agent" containerID="cri-o://7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae" gracePeriod=30 Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.407717 5031 scope.go:117] "RemoveContainer" containerID="8fe0f7777770b4c1c59f187104be805eb404c082aff018c3f5d840910cdb4e2c" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.451107 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:59:28 crc 
kubenswrapper[5031]: I0129 08:59:28.475481 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hhbcg"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.479503 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.44742533 podStartE2EDuration="46.479481028s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="2026-01-29 08:58:44.592921479 +0000 UTC m=+1205.092509431" lastFinishedPulling="2026-01-29 08:59:27.624977177 +0000 UTC m=+1248.124565129" observedRunningTime="2026-01-29 08:59:28.451083047 +0000 UTC m=+1248.950670999" watchObservedRunningTime="2026-01-29 08:59:28.479481028 +0000 UTC m=+1248.979068980" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.634652 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86875b9f7-r8mj8"] Jan 29 08:59:28 crc kubenswrapper[5031]: E0129 08:59:28.635345 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="dnsmasq-dns" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.635397 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="dnsmasq-dns" Jan 29 08:59:28 crc kubenswrapper[5031]: E0129 08:59:28.635421 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" containerName="barbican-db-sync" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.635429 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" containerName="barbican-db-sync" Jan 29 08:59:28 crc kubenswrapper[5031]: E0129 08:59:28.635447 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="init" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.635454 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="init" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.635668 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" containerName="dnsmasq-dns" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.635694 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" containerName="barbican-db-sync" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.638278 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.645305 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.645595 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.648887 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-44vxx" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.665104 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86875b9f7-r8mj8"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.681447 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-685b68c5cb-gfkqk"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.683004 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.692801 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-685b68c5cb-gfkqk"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.700198 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.725084 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.726859 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.766657 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lxk\" (UniqueName: \"kubernetes.io/projected/2769fca4-758e-4f92-a514-a70ca7cb0b5a-kube-api-access-p4lxk\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.766717 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-combined-ca-bundle\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.766746 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2769fca4-758e-4f92-a514-a70ca7cb0b5a-logs\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.766791 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data-custom\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 
08:59:28.766830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.767458 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.867923 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869044 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869142 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-combined-ca-bundle\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869172 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869835 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869925 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lxk\" (UniqueName: \"kubernetes.io/projected/2769fca4-758e-4f92-a514-a70ca7cb0b5a-kube-api-access-p4lxk\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.869968 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/74ae4456-e53d-410e-931c-108d9b79177f-kube-api-access-jlv5x\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " 
pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.870010 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-combined-ca-bundle\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.870034 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ae4456-e53d-410e-931c-108d9b79177f-logs\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.870067 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data-custom\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.870096 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2769fca4-758e-4f92-a514-a70ca7cb0b5a-logs\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.870039 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.871151 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2769fca4-758e-4f92-a514-a70ca7cb0b5a-logs\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.871680 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.872134 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b54n\" (UniqueName: \"kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.872174 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data-custom\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.872201 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.874376 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.887457 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.894787 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-combined-ca-bundle\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.896503 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2769fca4-758e-4f92-a514-a70ca7cb0b5a-config-data-custom\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.897731 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lxk\" 
(UniqueName: \"kubernetes.io/projected/2769fca4-758e-4f92-a514-a70ca7cb0b5a-kube-api-access-p4lxk\") pod \"barbican-worker-86875b9f7-r8mj8\" (UID: \"2769fca4-758e-4f92-a514-a70ca7cb0b5a\") " pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.899801 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974050 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974100 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b54n\" (UniqueName: \"kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974122 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974150 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974172 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-combined-ca-bundle\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974193 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974209 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974248 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: 
\"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974268 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974291 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/74ae4456-e53d-410e-931c-108d9b79177f-kube-api-access-jlv5x\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974308 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkdj\" (UniqueName: \"kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974331 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ae4456-e53d-410e-931c-108d9b79177f-logs\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974348 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data-custom\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974384 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.974408 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.975692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.977864 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ae4456-e53d-410e-931c-108d9b79177f-logs\") pod 
\"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.978955 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.979642 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.980470 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-86875b9f7-r8mj8" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.980786 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-combined-ca-bundle\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.981328 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.981846 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.985573 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74ae4456-e53d-410e-931c-108d9b79177f-config-data-custom\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.997976 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b54n\" (UniqueName: \"kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n\") pod \"dnsmasq-dns-869f779d85-tbgzf\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:28 crc kubenswrapper[5031]: I0129 08:59:28.998435 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/74ae4456-e53d-410e-931c-108d9b79177f-kube-api-access-jlv5x\") pod \"barbican-keystone-listener-685b68c5cb-gfkqk\" (UID: \"74ae4456-e53d-410e-931c-108d9b79177f\") " pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:29 crc 
kubenswrapper[5031]: I0129 08:59:29.022701 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.077202 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.078002 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.078060 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzkdj\" (UniqueName: \"kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.078152 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.078214 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.080390 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.082895 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.084434 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.086284 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: 
\"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.100514 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzkdj\" (UniqueName: \"kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj\") pod \"barbican-api-7cc85969c8-jq8bn\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.111851 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.241638 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.428314 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xg72z" event={"ID":"997a6082-d87d-4954-b383-9b27e161be4e","Type":"ContainerStarted","Data":"1a3c80a7fac4c3bb26e4b14f65137c3093a803a958279eab49d5317379606b7d"} Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439501 5031 generic.go:334] "Generic (PLEG): container finished" podID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerID="7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430" exitCode=0 Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439531 5031 generic.go:334] "Generic (PLEG): container finished" podID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerID="73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d" exitCode=2 Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439541 5031 generic.go:334] "Generic (PLEG): container finished" podID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerID="413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94" exitCode=0 Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439592 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerDied","Data":"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430"} Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439666 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerDied","Data":"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d"} Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.439680 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerDied","Data":"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94"} Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.459671 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xg72z" podStartSLOduration=4.667245123 podStartE2EDuration="47.459651173s" podCreationTimestamp="2026-01-29 08:58:42 +0000 UTC" firstStartedPulling="2026-01-29 08:58:44.818473518 +0000 UTC m=+1205.318061470" lastFinishedPulling="2026-01-29 08:59:27.610879568 +0000 UTC m=+1248.110467520" observedRunningTime="2026-01-29 08:59:29.446225573 +0000 UTC m=+1249.945813535" watchObservedRunningTime="2026-01-29 08:59:29.459651173 +0000 UTC m=+1249.959239125" Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.603858 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-86875b9f7-r8mj8"] Jan 29 08:59:29 crc kubenswrapper[5031]: W0129 08:59:29.606324 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2769fca4_758e_4f92_a514_a70ca7cb0b5a.slice/crio-1b49e56ca0849b6d93c1a8813a1ee3c2d8bc25b712603aacbc686a8d5b34f32d WatchSource:0}: Error finding container 1b49e56ca0849b6d93c1a8813a1ee3c2d8bc25b712603aacbc686a8d5b34f32d: Status 404 returned error can't find the container with id 1b49e56ca0849b6d93c1a8813a1ee3c2d8bc25b712603aacbc686a8d5b34f32d Jan 29 08:59:29 crc kubenswrapper[5031]: W0129 08:59:29.740396 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74ae4456_e53d_410e_931c_108d9b79177f.slice/crio-21dc1e90f18a774f9a5481300f2dd49fde51af04781101cd1fd2e29c50b5ab71 WatchSource:0}: Error finding container 21dc1e90f18a774f9a5481300f2dd49fde51af04781101cd1fd2e29c50b5ab71: Status 404 returned error can't find the container with id 21dc1e90f18a774f9a5481300f2dd49fde51af04781101cd1fd2e29c50b5ab71 Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.741754 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-685b68c5cb-gfkqk"] Jan 29 08:59:29 crc kubenswrapper[5031]: W0129 08:59:29.841662 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be37031_a33c_4ebf_977e_a463b2fe3762.slice/crio-d08c255e55bfcf22b7e860035f747abaae04964ffa8af3f19d1eec3847b7b0d9 WatchSource:0}: Error finding container d08c255e55bfcf22b7e860035f747abaae04964ffa8af3f19d1eec3847b7b0d9: Status 404 returned error can't find the container with id d08c255e55bfcf22b7e860035f747abaae04964ffa8af3f19d1eec3847b7b0d9 Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.841996 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:29 crc kubenswrapper[5031]: I0129 08:59:29.992132 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:30 crc kubenswrapper[5031]: W0129 08:59:30.000801 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb00e3d5c_e648_43d7_a014_815c0dcff26f.slice/crio-5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9 WatchSource:0}: Error finding container 5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9: Status 404 returned error can't find the container with id 5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9 Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.403708 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cea483-4915-4fd9-8b38-e257ec143e34" path="/var/lib/kubelet/pods/f2cea483-4915-4fd9-8b38-e257ec143e34/volumes" Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.467466 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86875b9f7-r8mj8" event={"ID":"2769fca4-758e-4f92-a514-a70ca7cb0b5a","Type":"ContainerStarted","Data":"1b49e56ca0849b6d93c1a8813a1ee3c2d8bc25b712603aacbc686a8d5b34f32d"} Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.475464 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" 
event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerStarted","Data":"4dbfe1c48587b57a3581dae11ed7e422649b9577ce4f55d55a47f100e0a83855"} Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.475536 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerStarted","Data":"5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9"} Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.478606 5031 generic.go:334] "Generic (PLEG): container finished" podID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerID="771c84e67ea19821caf96253b217f468821f5b4358ccf3e19ebe76dce3f315ae" exitCode=0 Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.478738 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" event={"ID":"4be37031-a33c-4ebf-977e-a463b2fe3762","Type":"ContainerDied","Data":"771c84e67ea19821caf96253b217f468821f5b4358ccf3e19ebe76dce3f315ae"} Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.478773 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" event={"ID":"4be37031-a33c-4ebf-977e-a463b2fe3762","Type":"ContainerStarted","Data":"d08c255e55bfcf22b7e860035f747abaae04964ffa8af3f19d1eec3847b7b0d9"} Jan 29 08:59:30 crc kubenswrapper[5031]: I0129 08:59:30.482654 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" event={"ID":"74ae4456-e53d-410e-931c-108d9b79177f","Type":"ContainerStarted","Data":"21dc1e90f18a774f9a5481300f2dd49fde51af04781101cd1fd2e29c50b5ab71"} Jan 29 08:59:31 crc kubenswrapper[5031]: I0129 08:59:31.494564 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerStarted","Data":"2870feb6f68ec7fd46746b113bb8d2857881d1c5348371a8408a371d7445dc42"} Jan 29 08:59:31 crc kubenswrapper[5031]: I0129 08:59:31.494931 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:31 crc kubenswrapper[5031]: I0129 08:59:31.494954 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:31 crc kubenswrapper[5031]: I0129 08:59:31.520332 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cc85969c8-jq8bn" podStartSLOduration=3.520311904 podStartE2EDuration="3.520311904s" podCreationTimestamp="2026-01-29 08:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:31.517756586 +0000 UTC m=+1252.017344538" watchObservedRunningTime="2026-01-29 08:59:31.520311904 +0000 UTC m=+1252.019899856" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.146786 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f47855b9d-vl7rl"] Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.148513 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.156555 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.157797 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.165830 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f47855b9d-vl7rl"]
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.244823 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-combined-ca-bundle\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.244890 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data-custom\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.244909 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-public-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.244929 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d945c8-336c-4683-8e04-2dd0de48b0ee-logs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.244984 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.245072 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-internal-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.245094 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrd7\" (UniqueName: \"kubernetes.io/projected/f5d945c8-336c-4683-8e04-2dd0de48b0ee-kube-api-access-szrd7\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347041 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-combined-ca-bundle\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl"
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-combined-ca-bundle\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347329 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data-custom\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347444 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-public-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347556 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d945c8-336c-4683-8e04-2dd0de48b0ee-logs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347680 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347839 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-internal-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.347917 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrd7\" (UniqueName: \"kubernetes.io/projected/f5d945c8-336c-4683-8e04-2dd0de48b0ee-kube-api-access-szrd7\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.348254 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d945c8-336c-4683-8e04-2dd0de48b0ee-logs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.351969 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-public-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.353162 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-combined-ca-bundle\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.353302 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-internal-tls-certs\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.356005 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data-custom\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.357840 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d945c8-336c-4683-8e04-2dd0de48b0ee-config-data\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.372599 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrd7\" (UniqueName: \"kubernetes.io/projected/f5d945c8-336c-4683-8e04-2dd0de48b0ee-kube-api-access-szrd7\") pod \"barbican-api-7f47855b9d-vl7rl\" (UID: \"f5d945c8-336c-4683-8e04-2dd0de48b0ee\") " pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.482437 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.527568 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" event={"ID":"4be37031-a33c-4ebf-977e-a463b2fe3762","Type":"ContainerStarted","Data":"8d03d7d9c5ca1645e0ba3a9245a76adc4f0c3236cfeb9f41264acc847b591eef"}
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.533005 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" event={"ID":"74ae4456-e53d-410e-931c-108d9b79177f","Type":"ContainerStarted","Data":"10629a188b88072d3b82b6d5853984271689a59521cf2ca90fd4bb3c82d7d3a9"}
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.533068 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" event={"ID":"74ae4456-e53d-410e-931c-108d9b79177f","Type":"ContainerStarted","Data":"d5cb5bfb6a5c14e158e4fe7a1ad6d7e04d9950dc9d72b103d6217a811ace7c8c"}
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.536354 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86875b9f7-r8mj8" event={"ID":"2769fca4-758e-4f92-a514-a70ca7cb0b5a","Type":"ContainerStarted","Data":"e93cc359a512c05a1a7f02834092deba2967fb32ce8098829e5040da2d27d0fd"}
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.536401 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86875b9f7-r8mj8" event={"ID":"2769fca4-758e-4f92-a514-a70ca7cb0b5a","Type":"ContainerStarted","Data":"ab70d11bcc743a3805048e802e64c03e46239e609121933f3225b3392df658ad"}
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.567506 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" podStartSLOduration=4.567485177 podStartE2EDuration="4.567485177s" podCreationTimestamp="2026-01-29 08:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:32.559607525 +0000 UTC m=+1253.059195497" watchObservedRunningTime="2026-01-29 08:59:32.567485177 +0000 UTC m=+1253.067073129"
Jan 29 08:59:32 crc kubenswrapper[5031]: I0129 08:59:32.586671 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86875b9f7-r8mj8" podStartSLOduration=2.554604118 podStartE2EDuration="4.586655362s" podCreationTimestamp="2026-01-29 08:59:28 +0000 UTC" firstStartedPulling="2026-01-29 08:59:29.608042665 +0000 UTC m=+1250.107630617" lastFinishedPulling="2026-01-29 08:59:31.640093909 +0000 UTC m=+1252.139681861" observedRunningTime="2026-01-29 08:59:32.584986117 +0000 UTC m=+1253.084574079" watchObservedRunningTime="2026-01-29 08:59:32.586655362 +0000 UTC m=+1253.086243314"
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.016949 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-685b68c5cb-gfkqk" podStartSLOduration=3.1178953050000002 podStartE2EDuration="5.016924849s" podCreationTimestamp="2026-01-29 08:59:28 +0000 UTC" firstStartedPulling="2026-01-29 08:59:29.742274428 +0000 UTC m=+1250.241862380" lastFinishedPulling="2026-01-29 08:59:31.641303972 +0000 UTC m=+1252.140891924" observedRunningTime="2026-01-29 08:59:32.62572336 +0000 UTC m=+1253.125311302" watchObservedRunningTime="2026-01-29 08:59:33.016924849 +0000 UTC m=+1253.516512811"
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.024060 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f47855b9d-vl7rl"]
Jan 29 08:59:33 crc kubenswrapper[5031]: W0129 08:59:33.030566 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5d945c8_336c_4683_8e04_2dd0de48b0ee.slice/crio-d5d56c9bbe05e10e5c73b6f83e5fb3c455b51c1aba8d6abe8d3e1f2a75708240 WatchSource:0}: Error finding container d5d56c9bbe05e10e5c73b6f83e5fb3c455b51c1aba8d6abe8d3e1f2a75708240: Status 404 returned error can't find the container with id d5d56c9bbe05e10e5c73b6f83e5fb3c455b51c1aba8d6abe8d3e1f2a75708240
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.281203 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.470792 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csw2s\" (UniqueName: \"kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") "
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.471301 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") "
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.471358 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") "
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.471439 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") "
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.471490 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") "
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.471729 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.472165 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.472566 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle\") pod \"8a6627fe-c450-4d80-ace6-085f7811d3b5\" (UID: \"8a6627fe-c450-4d80-ace6-085f7811d3b5\") " Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.472599 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.473230 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.473248 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a6627fe-c450-4d80-ace6-085f7811d3b5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.493643 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s" (OuterVolumeSpecName: "kube-api-access-csw2s") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "kube-api-access-csw2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.506392 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts" (OuterVolumeSpecName: "scripts") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.574518 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.574548 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csw2s\" (UniqueName: \"kubernetes.io/projected/8a6627fe-c450-4d80-ace6-085f7811d3b5-kube-api-access-csw2s\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.613544 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.641724 5031 generic.go:334] "Generic (PLEG): container finished" podID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerID="7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae" exitCode=0 Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.641822 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerDied","Data":"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae"} Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.641850 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a6627fe-c450-4d80-ace6-085f7811d3b5","Type":"ContainerDied","Data":"ce55193983fd24c8f4040fff7e43acf6c5f03a885d681dec5fef77ec13239ef6"} Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.641867 5031 scope.go:117] "RemoveContainer" containerID="7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.642055 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.660869 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.663544 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data" (OuterVolumeSpecName: "config-data") pod "8a6627fe-c450-4d80-ace6-085f7811d3b5" (UID: "8a6627fe-c450-4d80-ace6-085f7811d3b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.673302 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f47855b9d-vl7rl" event={"ID":"f5d945c8-336c-4683-8e04-2dd0de48b0ee","Type":"ContainerStarted","Data":"42edac6ffe52830d0b7fdc091d6ef30ebe044bb1386b1992c2aac59f5291e587"} Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.673344 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f47855b9d-vl7rl" event={"ID":"f5d945c8-336c-4683-8e04-2dd0de48b0ee","Type":"ContainerStarted","Data":"f15601716096028f9652269cdcb216e97b0dee76a6b764aa105084d46d8df725"} Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.673376 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f47855b9d-vl7rl" event={"ID":"f5d945c8-336c-4683-8e04-2dd0de48b0ee","Type":"ContainerStarted","Data":"d5d56c9bbe05e10e5c73b6f83e5fb3c455b51c1aba8d6abe8d3e1f2a75708240"} Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.673849 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.674260 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.674286 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.683704 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.683749 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.683762 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6627fe-c450-4d80-ace6-085f7811d3b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.691534 5031 scope.go:117] "RemoveContainer" containerID="73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.713475 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f47855b9d-vl7rl" podStartSLOduration=1.713459312 podStartE2EDuration="1.713459312s" podCreationTimestamp="2026-01-29 08:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:33.710134022 +0000 UTC m=+1254.209721974" watchObservedRunningTime="2026-01-29 08:59:33.713459312 +0000 UTC m=+1254.213047264" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.755291 5031 scope.go:117] "RemoveContainer" containerID="7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.771985 5031 scope.go:117] "RemoveContainer" containerID="413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.797904 5031 scope.go:117] "RemoveContainer" 
containerID="7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430" Jan 29 08:59:33 crc kubenswrapper[5031]: E0129 08:59:33.798272 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430\": container with ID starting with 7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430 not found: ID does not exist" containerID="7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.798303 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430"} err="failed to get container status \"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430\": rpc error: code = NotFound desc = could not find container \"7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430\": container with ID starting with 7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430 not found: ID does not exist" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.798326 5031 scope.go:117] "RemoveContainer" containerID="73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d" Jan 29 08:59:33 crc kubenswrapper[5031]: E0129 08:59:33.798792 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d\": container with ID starting with 73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d not found: ID does not exist" containerID="73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.798836 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d"} err="failed to get container status \"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d\": rpc error: code = NotFound desc = could not find container \"73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d\": container with ID starting with 73bfb14d7d31f8f322eb4dc02435712d4e1e0374027b571bda505a66c6de1e7d not found: ID does not exist" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.798867 5031 scope.go:117] "RemoveContainer" containerID="7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae" Jan 29 08:59:33 crc kubenswrapper[5031]: E0129 08:59:33.799176 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae\": container with ID starting with 7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae not found: ID does not exist" containerID="7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae" Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.799204 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae"} err="failed to get container status \"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae\": rpc error: code = NotFound desc = could not find container \"7aa72bcdb0a1c13a4b00e6eaa82c9fbe10d5765b7cb3cc099234d44c5c76b9ae\": container with ID starting with 
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.799226 5031 scope.go:117] "RemoveContainer" containerID="413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94"
Jan 29 08:59:33 crc kubenswrapper[5031]: E0129 08:59:33.799597 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94\": container with ID starting with 413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94 not found: ID does not exist" containerID="413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94"
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.799624 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94"} err="failed to get container status \"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94\": rpc error: code = NotFound desc = could not find container \"413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94\": container with ID starting with 413ffa86c678f8abf0f8442221960356df0e721c247a907b02708c89e28e7b94 not found: ID does not exist"
Jan 29 08:59:33 crc kubenswrapper[5031]: I0129 08:59:33.994279 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.020488 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.039951 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:59:34 crc kubenswrapper[5031]: E0129 08:59:34.040450 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-central-agent"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040467 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-central-agent"
Jan 29 08:59:34 crc kubenswrapper[5031]: E0129 08:59:34.040487 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="proxy-httpd"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040494 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="proxy-httpd"
Jan 29 08:59:34 crc kubenswrapper[5031]: E0129 08:59:34.040510 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-notification-agent"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040518 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-notification-agent"
Jan 29 08:59:34 crc kubenswrapper[5031]: E0129 08:59:34.040533 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="sg-core"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040539 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="sg-core"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040696 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="sg-core"
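The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above look alarming but are harmless: the containers were already removed, so the status lookup gets a gRPC NotFound and cleanup simply moves on, which is why the log continues straight into the pod's re-creation. An illustrative sketch of that general idempotent-removal pattern using grpc-go (this is not kubelet's actual code path, just the shape of the error handling):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer tolerates NotFound: a container that is already gone
    // needs no further cleanup, so the error is logged and swallowed.
    func removeContainer(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Println("already gone, treating as success:", id[:12])
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        _ = removeContainer(gone, "7bf18d0a2f5be408679415d7c418bb384ab8b04bbd535db0a628bbd8b1b88430")
    }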
containerName="sg-core" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040709 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-central-agent" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040727 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="proxy-httpd" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.040736 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" containerName="ceilometer-notification-agent" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.042468 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.046464 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.046635 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.050713 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.194468 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.194886 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7ld\" (UniqueName: \"kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.194929 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.194989 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.195051 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.195123 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296170 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a6627fe-c450-4d80-ace6-085f7811d3b5" path="/var/lib/kubelet/pods/8a6627fe-c450-4d80-ace6-085f7811d3b5/volumes"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296243 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296290 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296346 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296402 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-scripts\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296432 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296458 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7ld\" (UniqueName: \"kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.296483 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.297010 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.297069 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0"
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.303949 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.305148 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.305434 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.319170 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7ld\" (UniqueName: \"kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.319693 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-scripts\") pod \"ceilometer-0\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " pod="openstack/ceilometer-0" Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.375269 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:59:34 crc kubenswrapper[5031]: I0129 08:59:34.827756 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:59:35 crc kubenswrapper[5031]: I0129 08:59:35.695658 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerStarted","Data":"87d3261c8975d4641cc5f5bf6ee2f291333c7dc96618bcf8fa5e93d6edde9427"}
Jan 29 08:59:36 crc kubenswrapper[5031]: I0129 08:59:36.710706 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerStarted","Data":"ef5339f02f08c6ec9fdd23f814874e91bb3839224903f0bdd175ba3e7e2d5190"}
Jan 29 08:59:36 crc kubenswrapper[5031]: I0129 08:59:36.711048 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerStarted","Data":"d0acc8f922ce3e6573c47a4e9d4f8fbb52b70a5f12ecfe9617eaaf0031c8b56d"}
Jan 29 08:59:36 crc kubenswrapper[5031]: I0129 08:59:36.714499 5031 generic.go:334] "Generic (PLEG): container finished" podID="997a6082-d87d-4954-b383-9b27e161be4e" containerID="1a3c80a7fac4c3bb26e4b14f65137c3093a803a958279eab49d5317379606b7d" exitCode=0
Jan 29 08:59:36 crc kubenswrapper[5031]: I0129 08:59:36.714562 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xg72z" event={"ID":"997a6082-d87d-4954-b383-9b27e161be4e","Type":"ContainerDied","Data":"1a3c80a7fac4c3bb26e4b14f65137c3093a803a958279eab49d5317379606b7d"}
Jan 29 08:59:37 crc kubenswrapper[5031]: I0129 08:59:37.725015 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerStarted","Data":"0e38b14a2b6d58ca5e46e29ca7b95ec0492744ff85306bc2b331e52ff1b4dd47"}
Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.191880 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xg72z"
Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373609 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") "
Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373704 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373738 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373834 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm4fk\" (UniqueName: \"kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373904 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.373974 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.374395 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data\") pod \"997a6082-d87d-4954-b383-9b27e161be4e\" (UID: \"997a6082-d87d-4954-b383-9b27e161be4e\") " Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.375331 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/997a6082-d87d-4954-b383-9b27e161be4e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.381274 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk" (OuterVolumeSpecName: "kube-api-access-bm4fk") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "kube-api-access-bm4fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.393590 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts" (OuterVolumeSpecName: "scripts") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.396550 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.406654 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.423684 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data" (OuterVolumeSpecName: "config-data") pod "997a6082-d87d-4954-b383-9b27e161be4e" (UID: "997a6082-d87d-4954-b383-9b27e161be4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.476854 5031 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.476903 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.476917 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm4fk\" (UniqueName: \"kubernetes.io/projected/997a6082-d87d-4954-b383-9b27e161be4e-kube-api-access-bm4fk\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.476929 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.476942 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/997a6082-d87d-4954-b383-9b27e161be4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.744261 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xg72z" event={"ID":"997a6082-d87d-4954-b383-9b27e161be4e","Type":"ContainerDied","Data":"42bfc308da7d9c2d7f5c49bf6f0dc7a6bb9655115009dcc4b3ea5ac962689585"} Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.744315 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42bfc308da7d9c2d7f5c49bf6f0dc7a6bb9655115009dcc4b3ea5ac962689585" Jan 29 08:59:38 crc kubenswrapper[5031]: I0129 08:59:38.744425 5031 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.117508 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-tbgzf"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.133745 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 08:59:39 crc kubenswrapper[5031]: E0129 08:59:39.134124 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997a6082-d87d-4954-b383-9b27e161be4e" containerName="cinder-db-sync"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.134142 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="997a6082-d87d-4954-b383-9b27e161be4e" containerName="cinder-db-sync"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.134315 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="997a6082-d87d-4954-b383-9b27e161be4e" containerName="cinder-db-sync"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.135246 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.140889 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.141861 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.157156 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9qwcc"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.157992 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.205705 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.206416 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.206586 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.206700 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxbqf\" (UniqueName: \"kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.206788 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0"
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.206874 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.242643 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309227 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309274 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309310 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309336 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxbqf\" (UniqueName: \"kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309355 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.309390 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.310790 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.319982 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.324871 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.329515 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.329786 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="dnsmasq-dns" containerID="cri-o://4c401296a56dd6d6cf6c2c94367ce8759f2fd6edb77e3ee72ea715409a1f89c8" gracePeriod=10 Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.370601 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.370766 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.403698 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxbqf\" (UniqueName: \"kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf\") pod \"cinder-scheduler-0\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.403769 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.405255 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.414651 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.414705 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrp9\" (UniqueName: \"kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.414740 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.414769 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.414841 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.457526 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.515877 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.515941 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.515979 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrp9\" (UniqueName: \"kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.516024 5031 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.516047 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.516987 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.518044 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.518949 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.519323 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.570938 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.582541 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.588665 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.603763 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.603853 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.605390 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvrp9\" (UniqueName: \"kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9\") pod \"dnsmasq-dns-58db5546cc-9g9jl\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.608937 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724092 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbngg\" (UniqueName: \"kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724149 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724185 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724221 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724311 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724337 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.724420 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.802647 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerStarted","Data":"993d1d49a7bdbe162faa239bb0d618dd312efe58b701eec85db6ae4348afba00"} Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.805008 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.827782 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 
08:59:39.828214 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.828421 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.828565 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbngg\" (UniqueName: \"kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.828684 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.828823 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.828937 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.845875 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.846379 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.847085 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.847482 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.851948 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.853075 5031 generic.go:334] "Generic (PLEG): container finished" podID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerID="4c401296a56dd6d6cf6c2c94367ce8759f2fd6edb77e3ee72ea715409a1f89c8" exitCode=0 Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.853137 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" event={"ID":"ea6129e9-5206-488e-85f5-2ffccb4dd28b","Type":"ContainerDied","Data":"4c401296a56dd6d6cf6c2c94367ce8759f2fd6edb77e3ee72ea715409a1f89c8"} Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.855400 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.877349 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbngg\" (UniqueName: \"kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg\") pod \"cinder-api-0\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " pod="openstack/cinder-api-0" Jan 29 08:59:39 crc kubenswrapper[5031]: I0129 08:59:39.942612 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.320685 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.368725 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.121261519 podStartE2EDuration="7.368704096s" podCreationTimestamp="2026-01-29 08:59:33 +0000 UTC" firstStartedPulling="2026-01-29 08:59:34.833846559 +0000 UTC m=+1255.333434511" lastFinishedPulling="2026-01-29 08:59:39.081289136 +0000 UTC m=+1259.580877088" observedRunningTime="2026-01-29 08:59:39.844380285 +0000 UTC m=+1260.343968237" watchObservedRunningTime="2026-01-29 08:59:40.368704096 +0000 UTC m=+1260.868292048" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.450160 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc\") pod \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.450250 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb\") pod \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.450315 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config\") pod \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.450404 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4h88\" (UniqueName: \"kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88\") pod \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.450433 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb\") pod \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\" (UID: \"ea6129e9-5206-488e-85f5-2ffccb4dd28b\") " Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.495786 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88" (OuterVolumeSpecName: "kube-api-access-r4h88") pod "ea6129e9-5206-488e-85f5-2ffccb4dd28b" (UID: "ea6129e9-5206-488e-85f5-2ffccb4dd28b"). InnerVolumeSpecName "kube-api-access-r4h88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.521152 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.555740 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4h88\" (UniqueName: \"kubernetes.io/projected/ea6129e9-5206-488e-85f5-2ffccb4dd28b-kube-api-access-r4h88\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.579987 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea6129e9-5206-488e-85f5-2ffccb4dd28b" (UID: "ea6129e9-5206-488e-85f5-2ffccb4dd28b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.589751 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea6129e9-5206-488e-85f5-2ffccb4dd28b" (UID: "ea6129e9-5206-488e-85f5-2ffccb4dd28b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.608672 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea6129e9-5206-488e-85f5-2ffccb4dd28b" (UID: "ea6129e9-5206-488e-85f5-2ffccb4dd28b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.657594 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.657635 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.657648 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.661052 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config" (OuterVolumeSpecName: "config") pod "ea6129e9-5206-488e-85f5-2ffccb4dd28b" (UID: "ea6129e9-5206-488e-85f5-2ffccb4dd28b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.761767 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6129e9-5206-488e-85f5-2ffccb4dd28b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.787708 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"] Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.900973 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerStarted","Data":"4607f85f8753e087231374a735afd11b7445864fbabd2402802e719783cb8ba1"} Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.902001 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" event={"ID":"dabf38f1-9d5a-48fc-a84c-b97c108e4a36","Type":"ContainerStarted","Data":"eaee5adecf4269fa7ab157f93a02e7440494ac2800d01eb54a2315da0d6c595d"} Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.910551 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.911431 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-4zmh2" event={"ID":"ea6129e9-5206-488e-85f5-2ffccb4dd28b","Type":"ContainerDied","Data":"9a01519f33afd5753c749dd84a5b463efde44d3f2a4051d3564ae77d7556f566"} Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.911472 5031 scope.go:117] "RemoveContainer" containerID="4c401296a56dd6d6cf6c2c94367ce8759f2fd6edb77e3ee72ea715409a1f89c8" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.969232 5031 scope.go:117] "RemoveContainer" containerID="05263ab28ed137fbff86ffd33d166008d453c505e7f1ec75554c7b0b7cba2354" Jan 29 08:59:40 crc kubenswrapper[5031]: I0129 08:59:40.981822 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.110182 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.124403 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-4zmh2"] Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.708072 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.934472 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerStarted","Data":"77458b47832f9f5e71a8d74be85551b9942be807f60c2a589a4d6c65e779cd30"} Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.939866 5031 generic.go:334] "Generic (PLEG): container finished" podID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerID="3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e" exitCode=0 Jan 29 08:59:41 crc kubenswrapper[5031]: I0129 08:59:41.941333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" event={"ID":"dabf38f1-9d5a-48fc-a84c-b97c108e4a36","Type":"ContainerDied","Data":"3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e"} Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.130106 5031 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.294891 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" path="/var/lib/kubelet/pods/ea6129e9-5206-488e-85f5-2ffccb4dd28b/volumes" Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.957637 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerStarted","Data":"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177"} Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.962719 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" event={"ID":"dabf38f1-9d5a-48fc-a84c-b97c108e4a36","Type":"ContainerStarted","Data":"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1"} Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.962811 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.970018 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerStarted","Data":"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2"} Jan 29 08:59:42 crc kubenswrapper[5031]: I0129 08:59:42.992534 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" podStartSLOduration=3.992511041 podStartE2EDuration="3.992511041s" podCreationTimestamp="2026-01-29 08:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:42.978929796 +0000 UTC m=+1263.478517768" watchObservedRunningTime="2026-01-29 08:59:42.992511041 +0000 UTC m=+1263.492099003" Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.430225 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.984430 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerStarted","Data":"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77"} Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.990977 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerStarted","Data":"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9"} Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.991175 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api-log" containerID="cri-o://58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" gracePeriod=30 Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.991309 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:59:43 crc kubenswrapper[5031]: I0129 08:59:43.991355 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api" 
containerID="cri-o://ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" gracePeriod=30 Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.024102 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.147242333 podStartE2EDuration="5.024080265s" podCreationTimestamp="2026-01-29 08:59:39 +0000 UTC" firstStartedPulling="2026-01-29 08:59:40.559999049 +0000 UTC m=+1261.059587001" lastFinishedPulling="2026-01-29 08:59:41.436836981 +0000 UTC m=+1261.936424933" observedRunningTime="2026-01-29 08:59:44.013271415 +0000 UTC m=+1264.512859367" watchObservedRunningTime="2026-01-29 08:59:44.024080265 +0000 UTC m=+1264.523668217" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.070349 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.070331086 podStartE2EDuration="5.070331086s" podCreationTimestamp="2026-01-29 08:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:44.065568458 +0000 UTC m=+1264.565156410" watchObservedRunningTime="2026-01-29 08:59:44.070331086 +0000 UTC m=+1264.569919038" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.604945 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.811694 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979322 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979422 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbngg\" (UniqueName: \"kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979546 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979574 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979637 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.979696 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.980083 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs" (OuterVolumeSpecName: "logs") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.980157 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.980535 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data\") pod \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\" (UID: \"7d49ef4b-1b59-4d57-8825-4f26640be6d1\") " Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.980973 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d49ef4b-1b59-4d57-8825-4f26640be6d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.980991 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d49ef4b-1b59-4d57-8825-4f26640be6d1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.989603 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg" (OuterVolumeSpecName: "kube-api-access-wbngg") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "kube-api-access-wbngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.993509 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts" (OuterVolumeSpecName: "scripts") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:44 crc kubenswrapper[5031]: I0129 08:59:44.996501 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.023478 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.039642 5031 generic.go:334] "Generic (PLEG): container finished" podID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerID="ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" exitCode=0 Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.039679 5031 generic.go:334] "Generic (PLEG): container finished" podID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerID="58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" exitCode=143 Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.040685 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.041112 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerDied","Data":"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9"} Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.041143 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerDied","Data":"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177"} Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.041154 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d49ef4b-1b59-4d57-8825-4f26640be6d1","Type":"ContainerDied","Data":"77458b47832f9f5e71a8d74be85551b9942be807f60c2a589a4d6c65e779cd30"} Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.041172 5031 scope.go:117] "RemoveContainer" containerID="ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.062469 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data" (OuterVolumeSpecName: "config-data") pod "7d49ef4b-1b59-4d57-8825-4f26640be6d1" (UID: "7d49ef4b-1b59-4d57-8825-4f26640be6d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.083271 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.083318 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.083335 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.083349 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49ef4b-1b59-4d57-8825-4f26640be6d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.083379 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbngg\" (UniqueName: \"kubernetes.io/projected/7d49ef4b-1b59-4d57-8825-4f26640be6d1-kube-api-access-wbngg\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.174330 5031 scope.go:117] "RemoveContainer" containerID="58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.177804 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.193781 5031 scope.go:117] "RemoveContainer" containerID="ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.194232 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9\": container with ID starting with ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9 not found: ID does not exist" containerID="ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.194269 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9"} err="failed to get container status \"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9\": rpc error: code = NotFound desc = could not find container \"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9\": container with ID starting with ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9 not found: ID does not exist" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.194294 5031 scope.go:117] "RemoveContainer" containerID="58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.194607 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177\": container with ID starting with 58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177 not found: ID does not exist" 
containerID="58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.194655 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177"} err="failed to get container status \"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177\": rpc error: code = NotFound desc = could not find container \"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177\": container with ID starting with 58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177 not found: ID does not exist" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.194696 5031 scope.go:117] "RemoveContainer" containerID="ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.197689 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9"} err="failed to get container status \"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9\": rpc error: code = NotFound desc = could not find container \"ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9\": container with ID starting with ee6aee399fc295295eaf867c96c5dcffaf98d90aad3e46275cb712f5d20f52b9 not found: ID does not exist" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.197717 5031 scope.go:117] "RemoveContainer" containerID="58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.197978 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177"} err="failed to get container status \"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177\": rpc error: code = NotFound desc = could not find container \"58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177\": container with ID starting with 58886035aca875c80d2340fd5106a3007e6b99c8f87b3cf28e567bb87e36e177 not found: ID does not exist" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.217588 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f47855b9d-vl7rl" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.294964 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.295487 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cc85969c8-jq8bn" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api-log" containerID="cri-o://4dbfe1c48587b57a3581dae11ed7e422649b9577ce4f55d55a47f100e0a83855" gracePeriod=30 Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.295959 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cc85969c8-jq8bn" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api" containerID="cri-o://2870feb6f68ec7fd46746b113bb8d2857881d1c5348371a8408a371d7445dc42" gracePeriod=30 Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.419291 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.439816 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-api-0"] Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452169 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.452557 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="dnsmasq-dns" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452571 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="dnsmasq-dns" Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.452592 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452597 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api" Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.452611 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="init" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452617 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="init" Jan 29 08:59:45 crc kubenswrapper[5031]: E0129 08:59:45.452635 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api-log" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452640 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api-log" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452805 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452815 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6129e9-5206-488e-85f5-2ffccb4dd28b" containerName="dnsmasq-dns" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.452832 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" containerName="cinder-api-log" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.453719 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.458581 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.459383 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.462787 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.476586 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606562 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606608 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gwz\" (UniqueName: \"kubernetes.io/projected/2c053401-8bfa-4629-926e-e97653fbb397-kube-api-access-d5gwz\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606639 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606788 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606853 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c053401-8bfa-4629-926e-e97653fbb397-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606895 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c053401-8bfa-4629-926e-e97653fbb397-logs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606915 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.606985 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.607131 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-scripts\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.616171 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.709594 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-scripts\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.709689 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.709723 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gwz\" (UniqueName: \"kubernetes.io/projected/2c053401-8bfa-4629-926e-e97653fbb397-kube-api-access-d5gwz\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.709766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.710256 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.710737 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c053401-8bfa-4629-926e-e97653fbb397-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.710772 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c053401-8bfa-4629-926e-e97653fbb397-logs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.710788 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" 
Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.710872 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.711574 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c053401-8bfa-4629-926e-e97653fbb397-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.711909 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c053401-8bfa-4629-926e-e97653fbb397-logs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.726931 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.727197 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.739149 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-scripts\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.740139 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.740563 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.741274 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gwz\" (UniqueName: \"kubernetes.io/projected/2c053401-8bfa-4629-926e-e97653fbb397-kube-api-access-d5gwz\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.742053 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c053401-8bfa-4629-926e-e97653fbb397-config-data\") pod \"cinder-api-0\" (UID: \"2c053401-8bfa-4629-926e-e97653fbb397\") " pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.818019 5031 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:59:45 crc kubenswrapper[5031]: I0129 08:59:45.934240 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.103770 5031 generic.go:334] "Generic (PLEG): container finished" podID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerID="4dbfe1c48587b57a3581dae11ed7e422649b9577ce4f55d55a47f100e0a83855" exitCode=143 Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.104813 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerDied","Data":"4dbfe1c48587b57a3581dae11ed7e422649b9577ce4f55d55a47f100e0a83855"} Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.268005 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-97c68858b-9q587" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.297622 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d49ef4b-1b59-4d57-8825-4f26640be6d1" path="/var/lib/kubelet/pods/7d49ef4b-1b59-4d57-8825-4f26640be6d1/volumes" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.448535 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.584197 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c4fdc6744-xx4wj"] Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.586602 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.592385 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c4fdc6744-xx4wj"] Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739417 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-public-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739497 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-internal-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739528 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e009c8bd-2d71-405b-a166-53cf1451c8f0-logs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739545 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-combined-ca-bundle\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 
crc kubenswrapper[5031]: I0129 08:59:46.739581 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvsf\" (UniqueName: \"kubernetes.io/projected/e009c8bd-2d71-405b-a166-53cf1451c8f0-kube-api-access-9bvsf\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739629 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-scripts\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.739663 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-config-data\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-public-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841183 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-internal-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841212 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e009c8bd-2d71-405b-a166-53cf1451c8f0-logs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841232 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-combined-ca-bundle\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841268 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvsf\" (UniqueName: \"kubernetes.io/projected/e009c8bd-2d71-405b-a166-53cf1451c8f0-kube-api-access-9bvsf\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841313 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-scripts\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.841348 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-config-data\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.842197 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e009c8bd-2d71-405b-a166-53cf1451c8f0-logs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.849807 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-public-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.850113 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-scripts\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.850654 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-combined-ca-bundle\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.851342 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-config-data\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.864939 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e009c8bd-2d71-405b-a166-53cf1451c8f0-internal-tls-certs\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.865859 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvsf\" (UniqueName: \"kubernetes.io/projected/e009c8bd-2d71-405b-a166-53cf1451c8f0-kube-api-access-9bvsf\") pod \"placement-6c4fdc6744-xx4wj\" (UID: \"e009c8bd-2d71-405b-a166-53cf1451c8f0\") " pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.919796 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:46 crc kubenswrapper[5031]: I0129 08:59:46.989950 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6b6fcb467b-dc5s8" Jan 29 08:59:47 crc kubenswrapper[5031]: I0129 08:59:47.207663 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c053401-8bfa-4629-926e-e97653fbb397","Type":"ContainerStarted","Data":"11715915d75f4b3b0b3b5c1ca91832d48b5b16d8157e654f378dd0369d35a6c9"} Jan 29 08:59:47 crc kubenswrapper[5031]: I0129 08:59:47.535904 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c4fdc6744-xx4wj"] Jan 29 08:59:47 crc kubenswrapper[5031]: W0129 08:59:47.544717 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode009c8bd_2d71_405b_a166_53cf1451c8f0.slice/crio-a7d341d42ce35aa952aa4016b0b719c8cfd729ff20baff9e76d90f4db43c01d4 WatchSource:0}: Error finding container a7d341d42ce35aa952aa4016b0b719c8cfd729ff20baff9e76d90f4db43c01d4: Status 404 returned error can't find the container with id a7d341d42ce35aa952aa4016b0b719c8cfd729ff20baff9e76d90f4db43c01d4 Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.219022 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c053401-8bfa-4629-926e-e97653fbb397","Type":"ContainerStarted","Data":"2655b1963797c337791c034b7f663583f09605365665fb282f6cdfb0a56b651d"} Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.220695 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4fdc6744-xx4wj" event={"ID":"e009c8bd-2d71-405b-a166-53cf1451c8f0","Type":"ContainerStarted","Data":"5482e43953560c870c25857942332397e59182599aaea727ab04c3fd7fb7bc14"} Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.220718 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4fdc6744-xx4wj" event={"ID":"e009c8bd-2d71-405b-a166-53cf1451c8f0","Type":"ContainerStarted","Data":"0b8a38712214bbb1441687124361e377501db075046d7aaba29df1dc49bb4af3"} Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.220728 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4fdc6744-xx4wj" event={"ID":"e009c8bd-2d71-405b-a166-53cf1451c8f0","Type":"ContainerStarted","Data":"a7d341d42ce35aa952aa4016b0b719c8cfd729ff20baff9e76d90f4db43c01d4"} Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.220933 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.256879 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c4fdc6744-xx4wj" podStartSLOduration=2.256862169 podStartE2EDuration="2.256862169s" podCreationTimestamp="2026-01-29 08:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:48.243478249 +0000 UTC m=+1268.743066201" watchObservedRunningTime="2026-01-29 08:59:48.256862169 +0000 UTC m=+1268.756450121" Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.437829 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-558dccb5cc-bkkrn" Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.510424 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.510697 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55cd8fc46d-6fxwk" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-api" containerID="cri-o://6c08c56a28d1cd5d115e430600a8f8a7cd7ef18bcd823b24c8c04ad9c67e6636" gracePeriod=30 Jan 29 08:59:48 crc kubenswrapper[5031]: I0129 08:59:48.511205 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55cd8fc46d-6fxwk" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-httpd" containerID="cri-o://54c71041e7c4927e77d0c3367148761d20f97f56f5bae9f1561ec53a539fb273" gracePeriod=30 Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.246303 5031 generic.go:334] "Generic (PLEG): container finished" podID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerID="2870feb6f68ec7fd46746b113bb8d2857881d1c5348371a8408a371d7445dc42" exitCode=0 Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.246471 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerDied","Data":"2870feb6f68ec7fd46746b113bb8d2857881d1c5348371a8408a371d7445dc42"} Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.247246 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc85969c8-jq8bn" event={"ID":"b00e3d5c-e648-43d7-a014-815c0dcff26f","Type":"ContainerDied","Data":"5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9"} Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.247302 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe688519704751ebfd8bdb88fe198b008882f041389f73ab102f4674d48cbc9" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.254750 5031 generic.go:334] "Generic (PLEG): container finished" podID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerID="54c71041e7c4927e77d0c3367148761d20f97f56f5bae9f1561ec53a539fb273" exitCode=0 Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.254821 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerDied","Data":"54c71041e7c4927e77d0c3367148761d20f97f56f5bae9f1561ec53a539fb273"} Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.257434 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c053401-8bfa-4629-926e-e97653fbb397","Type":"ContainerStarted","Data":"b355a1ad2f2dca2dad8dc58f84c45ccd70d8d44dcd6a7fc4bebefc0d29784bec"} Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.257655 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c4fdc6744-xx4wj" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.261522 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.276987 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.276965205 podStartE2EDuration="4.276965205s" podCreationTimestamp="2026-01-29 08:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:49.274799027 +0000 UTC m=+1269.774386979" watchObservedRunningTime="2026-01-29 08:59:49.276965205 +0000 UTC m=+1269.776553157" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.413626 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzkdj\" (UniqueName: \"kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj\") pod \"b00e3d5c-e648-43d7-a014-815c0dcff26f\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.413699 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle\") pod \"b00e3d5c-e648-43d7-a014-815c0dcff26f\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.413965 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs\") pod \"b00e3d5c-e648-43d7-a014-815c0dcff26f\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.414142 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom\") pod \"b00e3d5c-e648-43d7-a014-815c0dcff26f\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.414172 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data\") pod \"b00e3d5c-e648-43d7-a014-815c0dcff26f\" (UID: \"b00e3d5c-e648-43d7-a014-815c0dcff26f\") " Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.414685 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs" (OuterVolumeSpecName: "logs") pod "b00e3d5c-e648-43d7-a014-815c0dcff26f" (UID: "b00e3d5c-e648-43d7-a014-815c0dcff26f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.415331 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b00e3d5c-e648-43d7-a014-815c0dcff26f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.421160 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b00e3d5c-e648-43d7-a014-815c0dcff26f" (UID: "b00e3d5c-e648-43d7-a014-815c0dcff26f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.425178 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj" (OuterVolumeSpecName: "kube-api-access-rzkdj") pod "b00e3d5c-e648-43d7-a014-815c0dcff26f" (UID: "b00e3d5c-e648-43d7-a014-815c0dcff26f"). InnerVolumeSpecName "kube-api-access-rzkdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.446823 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b00e3d5c-e648-43d7-a014-815c0dcff26f" (UID: "b00e3d5c-e648-43d7-a014-815c0dcff26f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.480259 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data" (OuterVolumeSpecName: "config-data") pod "b00e3d5c-e648-43d7-a014-815c0dcff26f" (UID: "b00e3d5c-e648-43d7-a014-815c0dcff26f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.516819 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.517115 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.517125 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzkdj\" (UniqueName: \"kubernetes.io/projected/b00e3d5c-e648-43d7-a014-815c0dcff26f-kube-api-access-rzkdj\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.517136 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00e3d5c-e648-43d7-a014-815c0dcff26f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.611484 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.633548 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 08:59:49 crc kubenswrapper[5031]: E0129 08:59:49.633977 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.633997 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api" Jan 29 08:59:49 crc kubenswrapper[5031]: E0129 08:59:49.634022 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api-log" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.634030 5031 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api-log" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.634203 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.634221 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api-log" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.635037 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.637189 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-plnbr" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.637580 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.650907 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.670995 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.698106 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.698329 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="dnsmasq-dns" containerID="cri-o://8d03d7d9c5ca1645e0ba3a9245a76adc4f0c3236cfeb9f41264acc847b591eef" gracePeriod=10 Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.723143 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qkx\" (UniqueName: \"kubernetes.io/projected/7cd1d91b-5c5a-425c-bb48-ed97702719d6-kube-api-access-q9qkx\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.723309 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.723352 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.723400 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.826182 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-q9qkx\" (UniqueName: \"kubernetes.io/projected/7cd1d91b-5c5a-425c-bb48-ed97702719d6-kube-api-access-q9qkx\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.826533 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.826604 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.826632 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.836255 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.836941 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.841114 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qkx\" (UniqueName: \"kubernetes.io/projected/7cd1d91b-5c5a-425c-bb48-ed97702719d6-kube-api-access-q9qkx\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.853193 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1d91b-5c5a-425c-bb48-ed97702719d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cd1d91b-5c5a-425c-bb48-ed97702719d6\") " pod="openstack/openstackclient" Jan 29 08:59:49 crc kubenswrapper[5031]: I0129 08:59:49.962103 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.026947 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.091902 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.274657 5031 generic.go:334] "Generic (PLEG): container finished" podID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerID="8d03d7d9c5ca1645e0ba3a9245a76adc4f0c3236cfeb9f41264acc847b591eef" exitCode=0 Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.276057 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" event={"ID":"4be37031-a33c-4ebf-977e-a463b2fe3762","Type":"ContainerDied","Data":"8d03d7d9c5ca1645e0ba3a9245a76adc4f0c3236cfeb9f41264acc847b591eef"} Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.276094 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.276260 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="cinder-scheduler" containerID="cri-o://5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2" gracePeriod=30 Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.276628 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cc85969c8-jq8bn" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.277170 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="probe" containerID="cri-o://ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77" gracePeriod=30 Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.322321 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.342398 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.375485 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7cc85969c8-jq8bn"] Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.471608 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb\") pod \"4be37031-a33c-4ebf-977e-a463b2fe3762\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.471765 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb\") pod \"4be37031-a33c-4ebf-977e-a463b2fe3762\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.471800 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config\") pod \"4be37031-a33c-4ebf-977e-a463b2fe3762\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.471873 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b54n\" (UniqueName: \"kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n\") pod \"4be37031-a33c-4ebf-977e-a463b2fe3762\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.471916 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc\") pod \"4be37031-a33c-4ebf-977e-a463b2fe3762\" (UID: \"4be37031-a33c-4ebf-977e-a463b2fe3762\") " Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.481856 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n" (OuterVolumeSpecName: "kube-api-access-5b54n") pod "4be37031-a33c-4ebf-977e-a463b2fe3762" (UID: "4be37031-a33c-4ebf-977e-a463b2fe3762"). InnerVolumeSpecName "kube-api-access-5b54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.561166 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config" (OuterVolumeSpecName: "config") pod "4be37031-a33c-4ebf-977e-a463b2fe3762" (UID: "4be37031-a33c-4ebf-977e-a463b2fe3762"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.569706 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4be37031-a33c-4ebf-977e-a463b2fe3762" (UID: "4be37031-a33c-4ebf-977e-a463b2fe3762"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.588805 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.588845 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b54n\" (UniqueName: \"kubernetes.io/projected/4be37031-a33c-4ebf-977e-a463b2fe3762-kube-api-access-5b54n\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.588861 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.589458 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4be37031-a33c-4ebf-977e-a463b2fe3762" (UID: "4be37031-a33c-4ebf-977e-a463b2fe3762"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.622027 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4be37031-a33c-4ebf-977e-a463b2fe3762" (UID: "4be37031-a33c-4ebf-977e-a463b2fe3762"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.697473 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.697500 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4be37031-a33c-4ebf-977e-a463b2fe3762-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5031]: I0129 08:59:50.710467 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.299986 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7cd1d91b-5c5a-425c-bb48-ed97702719d6","Type":"ContainerStarted","Data":"a110de482a1529316e3dc884be6916a229655478bcb50df385a862da779ec780"} Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.326878 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" event={"ID":"4be37031-a33c-4ebf-977e-a463b2fe3762","Type":"ContainerDied","Data":"d08c255e55bfcf22b7e860035f747abaae04964ffa8af3f19d1eec3847b7b0d9"} Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.326959 5031 scope.go:117] "RemoveContainer" containerID="8d03d7d9c5ca1645e0ba3a9245a76adc4f0c3236cfeb9f41264acc847b591eef" Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.327094 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-tbgzf" Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.384563 5031 scope.go:117] "RemoveContainer" containerID="771c84e67ea19821caf96253b217f468821f5b4358ccf3e19ebe76dce3f315ae" Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.388575 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:51 crc kubenswrapper[5031]: I0129 08:59:51.397876 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-tbgzf"] Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.299969 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" path="/var/lib/kubelet/pods/4be37031-a33c-4ebf-977e-a463b2fe3762/volumes" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.300824 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" path="/var/lib/kubelet/pods/b00e3d5c-e648-43d7-a014-815c0dcff26f/volumes" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.339816 5031 generic.go:334] "Generic (PLEG): container finished" podID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerID="ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77" exitCode=0 Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.339852 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerDied","Data":"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77"} Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.865531 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948588 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948644 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948681 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948740 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxbqf\" (UniqueName: \"kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948896 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: 
\"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.948931 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") pod \"34de6294-aaa2-4fe1-9179-40ee89555f2b\" (UID: \"34de6294-aaa2-4fe1-9179-40ee89555f2b\") " Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.953719 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.956085 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.956450 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf" (OuterVolumeSpecName: "kube-api-access-sxbqf") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "kube-api-access-sxbqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:52 crc kubenswrapper[5031]: I0129 08:59:52.963621 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts" (OuterVolumeSpecName: "scripts") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.016639 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.051269 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.051296 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.051320 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxbqf\" (UniqueName: \"kubernetes.io/projected/34de6294-aaa2-4fe1-9179-40ee89555f2b-kube-api-access-sxbqf\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.051329 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34de6294-aaa2-4fe1-9179-40ee89555f2b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.051337 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.089856 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data" (OuterVolumeSpecName: "config-data") pod "34de6294-aaa2-4fe1-9179-40ee89555f2b" (UID: "34de6294-aaa2-4fe1-9179-40ee89555f2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.153331 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de6294-aaa2-4fe1-9179-40ee89555f2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.352314 5031 generic.go:334] "Generic (PLEG): container finished" podID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerID="5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2" exitCode=0 Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.352378 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerDied","Data":"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2"} Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.352396 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.352420 5031 scope.go:117] "RemoveContainer" containerID="ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.352408 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34de6294-aaa2-4fe1-9179-40ee89555f2b","Type":"ContainerDied","Data":"4607f85f8753e087231374a735afd11b7445864fbabd2402802e719783cb8ba1"} Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.389528 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.399389 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.408555 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.409270 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="cinder-scheduler" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.409343 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="cinder-scheduler" Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.409433 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="dnsmasq-dns" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.409494 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="dnsmasq-dns" Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.409568 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="init" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.409625 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="init" Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.409692 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="probe" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.409741 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="probe" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.409958 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be37031-a33c-4ebf-977e-a463b2fe3762" containerName="dnsmasq-dns" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.410023 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="cinder-scheduler" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.410075 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" containerName="probe" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.411030 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.419186 5031 scope.go:117] "RemoveContainer" containerID="5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.419581 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.441758 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.494125 5031 scope.go:117] "RemoveContainer" containerID="ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77" Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.494611 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77\": container with ID starting with ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77 not found: ID does not exist" containerID="ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.494641 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77"} err="failed to get container status \"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77\": rpc error: code = NotFound desc = could not find container \"ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77\": container with ID starting with ef42ad3c6bbe0c417a34c8b6654a0c615de703cbd4adf0c70f7058106b923c77 not found: ID does not exist" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.494663 5031 scope.go:117] "RemoveContainer" containerID="5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2" Jan 29 08:59:53 crc kubenswrapper[5031]: E0129 08:59:53.495060 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2\": container with ID starting with 5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2 not found: ID does not exist" containerID="5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.495078 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2"} err="failed to get container status \"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2\": rpc error: code = NotFound desc = could not find container \"5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2\": container with ID starting with 5b98bb29bfd2c8a6ac0b3d6bf3b60f5409098c1ec50e202b394b63dcd8ab32f2 not found: ID does not exist" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.560944 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.561298 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.561400 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ce55669-5a60-4cbb-8994-441b7c5d0c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.561429 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.561477 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hjh\" (UniqueName: \"kubernetes.io/projected/2ce55669-5a60-4cbb-8994-441b7c5d0c75-kube-api-access-78hjh\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.561513 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663063 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ce55669-5a60-4cbb-8994-441b7c5d0c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663120 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663156 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hjh\" (UniqueName: \"kubernetes.io/projected/2ce55669-5a60-4cbb-8994-441b7c5d0c75-kube-api-access-78hjh\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663180 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663204 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2ce55669-5a60-4cbb-8994-441b7c5d0c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663252 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.663500 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.667539 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.667999 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.668495 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.672721 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce55669-5a60-4cbb-8994-441b7c5d0c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.686097 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hjh\" (UniqueName: \"kubernetes.io/projected/2ce55669-5a60-4cbb-8994-441b7c5d0c75-kube-api-access-78hjh\") pod \"cinder-scheduler-0\" (UID: \"2ce55669-5a60-4cbb-8994-441b7c5d0c75\") " pod="openstack/cinder-scheduler-0" Jan 29 08:59:53 crc kubenswrapper[5031]: I0129 08:59:53.779788 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:59:54 crc kubenswrapper[5031]: I0129 08:59:54.243637 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc85969c8-jq8bn" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:59:54 crc kubenswrapper[5031]: I0129 08:59:54.246052 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc85969c8-jq8bn" podUID="b00e3d5c-e648-43d7-a014-815c0dcff26f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:59:54 crc kubenswrapper[5031]: I0129 08:59:54.295548 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34de6294-aaa2-4fe1-9179-40ee89555f2b" path="/var/lib/kubelet/pods/34de6294-aaa2-4fe1-9179-40ee89555f2b/volumes" Jan 29 08:59:54 crc kubenswrapper[5031]: I0129 08:59:54.406852 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:59:54 crc kubenswrapper[5031]: W0129 08:59:54.431735 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce55669_5a60_4cbb_8994_441b7c5d0c75.slice/crio-5f75f6789c9d5c489f9b0d6cbc5544099ec87baa721901c06b7d5e55f4b35a6e WatchSource:0}: Error finding container 5f75f6789c9d5c489f9b0d6cbc5544099ec87baa721901c06b7d5e55f4b35a6e: Status 404 returned error can't find the container with id 5f75f6789c9d5c489f9b0d6cbc5544099ec87baa721901c06b7d5e55f4b35a6e Jan 29 08:59:55 crc kubenswrapper[5031]: I0129 08:59:55.377016 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ce55669-5a60-4cbb-8994-441b7c5d0c75","Type":"ContainerStarted","Data":"413cf7d3f9f4fa74d7338f4fb15fafb8a4576b3d6cccbb514ba82120544bb82e"} Jan 29 08:59:55 crc kubenswrapper[5031]: I0129 08:59:55.377557 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ce55669-5a60-4cbb-8994-441b7c5d0c75","Type":"ContainerStarted","Data":"5f75f6789c9d5c489f9b0d6cbc5544099ec87baa721901c06b7d5e55f4b35a6e"} Jan 29 08:59:56 crc kubenswrapper[5031]: I0129 08:59:56.392287 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ce55669-5a60-4cbb-8994-441b7c5d0c75","Type":"ContainerStarted","Data":"a046d46bc6cb9da9f9f01af0c93e13e1d5b4a4b64ac8d258530d41cfd14209d6"} Jan 29 08:59:56 crc kubenswrapper[5031]: I0129 08:59:56.416485 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.416466995 podStartE2EDuration="3.416466995s" podCreationTimestamp="2026-01-29 08:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:56.416429944 +0000 UTC m=+1276.916017916" watchObservedRunningTime="2026-01-29 08:59:56.416466995 +0000 UTC m=+1276.916054947" Jan 29 08:59:58 crc kubenswrapper[5031]: I0129 08:59:58.191393 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 08:59:58 crc kubenswrapper[5031]: I0129 08:59:58.470533 5031 generic.go:334] "Generic (PLEG): container 
finished" podID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerID="6c08c56a28d1cd5d115e430600a8f8a7cd7ef18bcd823b24c8c04ad9c67e6636" exitCode=0 Jan 29 08:59:58 crc kubenswrapper[5031]: I0129 08:59:58.470902 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerDied","Data":"6c08c56a28d1cd5d115e430600a8f8a7cd7ef18bcd823b24c8c04ad9c67e6636"} Jan 29 08:59:58 crc kubenswrapper[5031]: I0129 08:59:58.781809 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.137982 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg"] Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.139142 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.142218 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.142540 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.150206 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg"] Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.201551 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.201665 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.201711 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fklh\" (UniqueName: \"kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.304267 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.304393 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fklh\" 
(UniqueName: \"kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.304686 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.306932 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.343309 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.368964 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fklh\" (UniqueName: \"kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh\") pod \"collect-profiles-29494620-p8mlg\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:00 crc kubenswrapper[5031]: I0129 09:00:00.473196 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.517507 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.518221 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-central-agent" containerID="cri-o://d0acc8f922ce3e6573c47a4e9d4f8fbb52b70a5f12ecfe9617eaaf0031c8b56d" gracePeriod=30 Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.519134 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="proxy-httpd" containerID="cri-o://993d1d49a7bdbe162faa239bb0d618dd312efe58b701eec85db6ae4348afba00" gracePeriod=30 Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.519223 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="sg-core" containerID="cri-o://0e38b14a2b6d58ca5e46e29ca7b95ec0492744ff85306bc2b331e52ff1b4dd47" gracePeriod=30 Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.519276 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-notification-agent" containerID="cri-o://ef5339f02f08c6ec9fdd23f814874e91bb3839224903f0bdd175ba3e7e2d5190" gracePeriod=30 Jan 29 09:00:02 crc kubenswrapper[5031]: I0129 09:00:02.537069 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:00:03 crc kubenswrapper[5031]: I0129 09:00:03.537153 5031 generic.go:334] "Generic (PLEG): container finished" podID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerID="993d1d49a7bdbe162faa239bb0d618dd312efe58b701eec85db6ae4348afba00" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[5031]: I0129 09:00:03.537194 5031 generic.go:334] "Generic (PLEG): container finished" podID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerID="0e38b14a2b6d58ca5e46e29ca7b95ec0492744ff85306bc2b331e52ff1b4dd47" exitCode=2 Jan 29 09:00:03 crc kubenswrapper[5031]: I0129 09:00:03.537252 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerDied","Data":"993d1d49a7bdbe162faa239bb0d618dd312efe58b701eec85db6ae4348afba00"} Jan 29 09:00:03 crc kubenswrapper[5031]: I0129 09:00:03.537292 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerDied","Data":"0e38b14a2b6d58ca5e46e29ca7b95ec0492744ff85306bc2b331e52ff1b4dd47"} Jan 29 09:00:04 crc kubenswrapper[5031]: I0129 09:00:04.090352 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 09:00:04 crc kubenswrapper[5031]: I0129 09:00:04.376621 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.149:3000/\": dial tcp 10.217.0.149:3000: connect: connection refused" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.566623 5031 generic.go:334] "Generic (PLEG): container finished" 
podID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerID="d0acc8f922ce3e6573c47a4e9d4f8fbb52b70a5f12ecfe9617eaaf0031c8b56d" exitCode=0 Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.566882 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerDied","Data":"d0acc8f922ce3e6573c47a4e9d4f8fbb52b70a5f12ecfe9617eaaf0031c8b56d"} Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.693496 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.838001 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg"] Jan 29 09:00:05 crc kubenswrapper[5031]: W0129 09:00:05.844501 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod577548b3_0ae4_42be_b7bf_a8a79788186e.slice/crio-b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138 WatchSource:0}: Error finding container b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138: Status 404 returned error can't find the container with id b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138 Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.856101 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config\") pod \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.856146 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs\") pod \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.856288 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config\") pod \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.856315 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5rq\" (UniqueName: \"kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq\") pod \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.856458 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle\") pod \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\" (UID: \"cf647f09-336d-4f0a-9cf7-415ecf4a9d26\") " Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.864543 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cf647f09-336d-4f0a-9cf7-415ecf4a9d26" (UID: "cf647f09-336d-4f0a-9cf7-415ecf4a9d26"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.866550 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq" (OuterVolumeSpecName: "kube-api-access-6z5rq") pod "cf647f09-336d-4f0a-9cf7-415ecf4a9d26" (UID: "cf647f09-336d-4f0a-9cf7-415ecf4a9d26"). InnerVolumeSpecName "kube-api-access-6z5rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.922541 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf647f09-336d-4f0a-9cf7-415ecf4a9d26" (UID: "cf647f09-336d-4f0a-9cf7-415ecf4a9d26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.925635 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config" (OuterVolumeSpecName: "config") pod "cf647f09-336d-4f0a-9cf7-415ecf4a9d26" (UID: "cf647f09-336d-4f0a-9cf7-415ecf4a9d26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.959862 5031 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.960018 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5rq\" (UniqueName: \"kubernetes.io/projected/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-kube-api-access-6z5rq\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.960108 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.960192 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[5031]: I0129 09:00:05.966639 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cf647f09-336d-4f0a-9cf7-415ecf4a9d26" (UID: "cf647f09-336d-4f0a-9cf7-415ecf4a9d26"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.061851 5031 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf647f09-336d-4f0a-9cf7-415ecf4a9d26-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.588405 5031 generic.go:334] "Generic (PLEG): container finished" podID="577548b3-0ae4-42be-b7bf-a8a79788186e" containerID="44ce92c733d26f6b44ed27cbae097d03f6bd51bf66637bd2448bdeaecda730a0" exitCode=0 Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.588592 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" event={"ID":"577548b3-0ae4-42be-b7bf-a8a79788186e","Type":"ContainerDied","Data":"44ce92c733d26f6b44ed27cbae097d03f6bd51bf66637bd2448bdeaecda730a0"} Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.588630 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" event={"ID":"577548b3-0ae4-42be-b7bf-a8a79788186e","Type":"ContainerStarted","Data":"b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138"} Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.596242 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55cd8fc46d-6fxwk" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.597338 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cd8fc46d-6fxwk" event={"ID":"cf647f09-336d-4f0a-9cf7-415ecf4a9d26","Type":"ContainerDied","Data":"81c185a93d47ef8b90c1987d97c1f297994fabd3c27037841313ca68d58b017d"} Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.597495 5031 scope.go:117] "RemoveContainer" containerID="54c71041e7c4927e77d0c3367148761d20f97f56f5bae9f1561ec53a539fb273" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.626927 5031 generic.go:334] "Generic (PLEG): container finished" podID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerID="ef5339f02f08c6ec9fdd23f814874e91bb3839224903f0bdd175ba3e7e2d5190" exitCode=0 Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.627178 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerDied","Data":"ef5339f02f08c6ec9fdd23f814874e91bb3839224903f0bdd175ba3e7e2d5190"} Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.631262 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7cd1d91b-5c5a-425c-bb48-ed97702719d6","Type":"ContainerStarted","Data":"dc843fb4d93c62b59b4b9ba3aa6eb9a7517af08902cadf233b062fc46c27dcc5"} Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.641888 5031 scope.go:117] "RemoveContainer" containerID="6c08c56a28d1cd5d115e430600a8f8a7cd7ef18bcd823b24c8c04ad9c67e6636" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.694713 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.703796 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55cd8fc46d-6fxwk"] Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.704213 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.155673991 podStartE2EDuration="17.704194524s" 
podCreationTimestamp="2026-01-29 08:59:49 +0000 UTC" firstStartedPulling="2026-01-29 08:59:50.696022618 +0000 UTC m=+1271.195610570" lastFinishedPulling="2026-01-29 09:00:05.244543151 +0000 UTC m=+1285.744131103" observedRunningTime="2026-01-29 09:00:06.673954942 +0000 UTC m=+1287.173542914" watchObservedRunningTime="2026-01-29 09:00:06.704194524 +0000 UTC m=+1287.203782476" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.789344 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.850951 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.851516 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" containerName="kube-state-metrics" containerID="cri-o://8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03" gracePeriod=30 Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981418 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981500 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-scripts\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981551 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981594 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981645 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7ld\" (UniqueName: \"kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981670 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.981751 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle\") pod \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\" (UID: \"504b5f7b-fb13-436e-9e5a-b66a5bb203b7\") " Jan 29 09:00:06 crc 
kubenswrapper[5031]: I0129 09:00:06.983215 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.983628 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.991813 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-scripts" (OuterVolumeSpecName: "scripts") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:06 crc kubenswrapper[5031]: I0129 09:00:06.998006 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld" (OuterVolumeSpecName: "kube-api-access-tq7ld") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "kube-api-access-tq7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.022842 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.084262 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.084814 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.084830 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.084842 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq7ld\" (UniqueName: \"kubernetes.io/projected/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-kube-api-access-tq7ld\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.084859 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.091293 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.121203 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data" (OuterVolumeSpecName: "config-data") pod "504b5f7b-fb13-436e-9e5a-b66a5bb203b7" (UID: "504b5f7b-fb13-436e-9e5a-b66a5bb203b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.187796 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.187830 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504b5f7b-fb13-436e-9e5a-b66a5bb203b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.396457 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.603570 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5r6v\" (UniqueName: \"kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v\") pod \"6c528f35-8b42-42a9-9e47-9aee6ba624f5\" (UID: \"6c528f35-8b42-42a9-9e47-9aee6ba624f5\") " Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.612625 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v" (OuterVolumeSpecName: "kube-api-access-w5r6v") pod "6c528f35-8b42-42a9-9e47-9aee6ba624f5" (UID: "6c528f35-8b42-42a9-9e47-9aee6ba624f5"). InnerVolumeSpecName "kube-api-access-w5r6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.647172 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"504b5f7b-fb13-436e-9e5a-b66a5bb203b7","Type":"ContainerDied","Data":"87d3261c8975d4641cc5f5bf6ee2f291333c7dc96618bcf8fa5e93d6edde9427"} Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.647216 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.647254 5031 scope.go:117] "RemoveContainer" containerID="993d1d49a7bdbe162faa239bb0d618dd312efe58b701eec85db6ae4348afba00" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.654805 5031 generic.go:334] "Generic (PLEG): container finished" podID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" containerID="8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03" exitCode=2 Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.654889 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c528f35-8b42-42a9-9e47-9aee6ba624f5","Type":"ContainerDied","Data":"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03"} Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.654918 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c528f35-8b42-42a9-9e47-9aee6ba624f5","Type":"ContainerDied","Data":"76a43af5c6e673f052f461f5a584d5bbb4a31b233f9df5e2ac549dc8755c6f3f"} Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.654991 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.706217 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5r6v\" (UniqueName: \"kubernetes.io/projected/6c528f35-8b42-42a9-9e47-9aee6ba624f5-kube-api-access-w5r6v\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.708831 5031 scope.go:117] "RemoveContainer" containerID="0e38b14a2b6d58ca5e46e29ca7b95ec0492744ff85306bc2b331e52ff1b4dd47" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.711107 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.748907 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.758814 5031 scope.go:117] "RemoveContainer" containerID="ef5339f02f08c6ec9fdd23f814874e91bb3839224903f0bdd175ba3e7e2d5190" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.770750 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.802805 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.807658 5031 scope.go:117] "RemoveContainer" containerID="d0acc8f922ce3e6573c47a4e9d4f8fbb52b70a5f12ecfe9617eaaf0031c8b56d" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.817644 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818231 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="proxy-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818250 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="proxy-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818269 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" containerName="kube-state-metrics" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818278 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" containerName="kube-state-metrics" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818305 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818313 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818322 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-notification-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818329 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-notification-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818338 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-central-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818344 5031 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-central-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818436 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="sg-core" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818444 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="sg-core" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.818464 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-api" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818471 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-api" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818704 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818746 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-notification-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818757 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="ceilometer-central-agent" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818769 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="sg-core" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818776 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" containerName="neutron-api" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818783 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" containerName="kube-state-metrics" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.818792 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" containerName="proxy-httpd" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.820754 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.823045 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vbmrc" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.823317 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.824751 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.830624 5031 scope.go:117] "RemoveContainer" containerID="8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.846063 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.847549 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.859343 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.863627 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.863863 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.865592 5031 scope.go:117] "RemoveContainer" containerID="8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03" Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.866171 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03\": container with ID starting with 8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03 not found: ID does not exist" containerID="8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.866223 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03"} err="failed to get container status \"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03\": rpc error: code = NotFound desc = could not find container \"8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03\": container with ID starting with 8be1ea99436aec9cabc0c3ff0d484022182f6b8dbb7d8d9c545e64faf7cded03 not found: ID does not exist" Jan 29 09:00:07 crc kubenswrapper[5031]: I0129 09:00:07.878876 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:07 crc kubenswrapper[5031]: E0129 09:00:07.886038 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c528f35_8b42_42a9_9e47_9aee6ba624f5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod504b5f7b_fb13_436e_9e5a_b66a5bb203b7.slice\": RecentStats: unable to find data in memory cache]" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015741 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c89zj\" (UniqueName: \"kubernetes.io/projected/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-api-access-c89zj\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015771 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015804 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015832 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015859 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015904 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.015988 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.016054 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.016076 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8q4\" (UniqueName: \"kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.016128 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.117895 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc 
kubenswrapper[5031]: I0129 09:00:08.117935 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8q4\" (UniqueName: \"kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.117986 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118030 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118060 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c89zj\" (UniqueName: \"kubernetes.io/projected/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-api-access-c89zj\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118078 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118103 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118124 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118150 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118199 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.118230 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.133000 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.133267 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.133609 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.133856 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.140029 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.142397 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.142558 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.148521 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c89zj\" (UniqueName: \"kubernetes.io/projected/c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d-kube-api-access-c89zj\") pod \"kube-state-metrics-0\" (UID: \"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d\") " pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.153453 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.160436 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.162894 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8q4\" (UniqueName: \"kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4\") pod \"ceilometer-0\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") " pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.165046 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.192984 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.287227 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.307165 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504b5f7b-fb13-436e-9e5a-b66a5bb203b7" path="/var/lib/kubelet/pods/504b5f7b-fb13-436e-9e5a-b66a5bb203b7/volumes" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.308247 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c528f35-8b42-42a9-9e47-9aee6ba624f5" path="/var/lib/kubelet/pods/6c528f35-8b42-42a9-9e47-9aee6ba624f5/volumes" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.308912 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf647f09-336d-4f0a-9cf7-415ecf4a9d26" path="/var/lib/kubelet/pods/cf647f09-336d-4f0a-9cf7-415ecf4a9d26/volumes" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.423466 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume\") pod \"577548b3-0ae4-42be-b7bf-a8a79788186e\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.423679 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume\") pod \"577548b3-0ae4-42be-b7bf-a8a79788186e\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.423739 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fklh\" (UniqueName: \"kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh\") pod \"577548b3-0ae4-42be-b7bf-a8a79788186e\" (UID: \"577548b3-0ae4-42be-b7bf-a8a79788186e\") " Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.425945 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume" (OuterVolumeSpecName: "config-volume") pod "577548b3-0ae4-42be-b7bf-a8a79788186e" (UID: "577548b3-0ae4-42be-b7bf-a8a79788186e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.430289 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh" (OuterVolumeSpecName: "kube-api-access-4fklh") pod "577548b3-0ae4-42be-b7bf-a8a79788186e" (UID: "577548b3-0ae4-42be-b7bf-a8a79788186e"). InnerVolumeSpecName "kube-api-access-4fklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.430383 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "577548b3-0ae4-42be-b7bf-a8a79788186e" (UID: "577548b3-0ae4-42be-b7bf-a8a79788186e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.526006 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/577548b3-0ae4-42be-b7bf-a8a79788186e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.526319 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/577548b3-0ae4-42be-b7bf-a8a79788186e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.526332 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fklh\" (UniqueName: \"kubernetes.io/projected/577548b3-0ae4-42be-b7bf-a8a79788186e-kube-api-access-4fklh\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.595510 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.676073 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" event={"ID":"577548b3-0ae4-42be-b7bf-a8a79788186e","Type":"ContainerDied","Data":"b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138"} Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.676099 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.676113 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39f22b8b139277e7d8fe0efcf3397ec628869ae67f0d3bc3943b5801ae93138" Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.678277 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:08 crc kubenswrapper[5031]: I0129 09:00:08.747442 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.694268 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"61587c1369dee48be7708319e4d0edd6b7ecf0430e6aa5d70ac799022822e829"} Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.694610 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"13eab7ad78c2f58008df3c73a98cbfafb7d593ecd074626c2f06422671cfff3f"} Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.698029 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d","Type":"ContainerStarted","Data":"fa24e3170d9fb5e5ad068cad01b87b9b4942c5dc76507a76e4ba21924ee6a131"} Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.698076 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d","Type":"ContainerStarted","Data":"6238bb9a39dfe2371cbcbe4b3f96b91690b74b563fb1a0896c77e1003a8652c4"} Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.698186 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 09:00:09 crc kubenswrapper[5031]: I0129 09:00:09.718276 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.239232336 podStartE2EDuration="2.718253231s" podCreationTimestamp="2026-01-29 09:00:07 +0000 UTC" firstStartedPulling="2026-01-29 09:00:08.753434229 +0000 UTC m=+1289.253022181" lastFinishedPulling="2026-01-29 09:00:09.232455124 +0000 UTC m=+1289.732043076" observedRunningTime="2026-01-29 09:00:09.712970839 +0000 UTC m=+1290.212558801" watchObservedRunningTime="2026-01-29 09:00:09.718253231 +0000 UTC m=+1290.217841183" Jan 29 09:00:10 crc kubenswrapper[5031]: I0129 09:00:10.710491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942"} Jan 29 09:00:11 crc kubenswrapper[5031]: I0129 09:00:11.721247 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c"} Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776755 5031 generic.go:334] "Generic (PLEG): container finished" podID="36d96360-2b92-4cbf-8094-71193ef211c8" containerID="852e4c48e175527824cd6fb62c03c3547509bad32eaf2f324468c5fa6de8f05c" exitCode=1 Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776828 5031 
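The "Observed pod startup duration" entry just above is internally consistent, and the two headline figures can be checked directly from its own fields (this is a reading of the logged values, not kubelet code): the e2e duration is watch-observed running time minus pod creation time, and the SLO duration additionally excludes the image-pull window.

```python
# Verify the startup-latency arithmetic using the fields from the entry above,
# expressed as seconds past 09:00:00 for readability.
from decimal import Decimal

creation   = Decimal("7.000000000")   # podCreationTimestamp 09:00:07
pull_start = Decimal("8.753434229")   # firstStartedPulling
pull_end   = Decimal("9.232455124")   # lastFinishedPulling
running    = Decimal("9.718253231")   # watchObservedRunningTime

e2e = running - creation              # -> 2.718253231, the podStartE2EDuration
slo = e2e - (pull_end - pull_start)   # -> 2.239232336, the podStartSLOduration
assert str(e2e) == "2.718253231" and str(slo) == "2.239232336"
```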
Jan 29 09:00:10 crc kubenswrapper[5031]: I0129 09:00:10.710491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942"}
Jan 29 09:00:11 crc kubenswrapper[5031]: I0129 09:00:11.721247 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerStarted","Data":"cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c"}
Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776755 5031 generic.go:334] "Generic (PLEG): container finished" podID="36d96360-2b92-4cbf-8094-71193ef211c8" containerID="852e4c48e175527824cd6fb62c03c3547509bad32eaf2f324468c5fa6de8f05c" exitCode=1
Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776828 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerDied","Data":"852e4c48e175527824cd6fb62c03c3547509bad32eaf2f324468c5fa6de8f05c"}
Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776942 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-central-agent" containerID="cri-o://61587c1369dee48be7708319e4d0edd6b7ecf0430e6aa5d70ac799022822e829" gracePeriod=30
Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.776962 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="sg-core" containerID="cri-o://cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c" gracePeriod=30
Jan 29 09:00:13 crc kubenswrapper[5031]: I0129 09:00:13.777003 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-notification-agent" containerID="cri-o://14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942" gracePeriod=30
Jan 29 09:00:14 crc kubenswrapper[5031]: I0129 09:00:14.833585 5031 generic.go:334] "Generic (PLEG): container finished" podID="36d96360-2b92-4cbf-8094-71193ef211c8" containerID="cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c" exitCode=2
Jan 29 09:00:14 crc kubenswrapper[5031]: I0129 09:00:14.833905 5031 generic.go:334] "Generic (PLEG): container finished" podID="36d96360-2b92-4cbf-8094-71193ef211c8" containerID="14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942" exitCode=0
Jan 29 09:00:14 crc kubenswrapper[5031]: I0129 09:00:14.833928 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerDied","Data":"cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c"}
Jan 29 09:00:14 crc kubenswrapper[5031]: I0129 09:00:14.833955 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerDied","Data":"14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942"}
Jan 29 09:00:16 crc kubenswrapper[5031]: I0129 09:00:16.854275 5031 generic.go:334] "Generic (PLEG): container finished" podID="36d96360-2b92-4cbf-8094-71193ef211c8" containerID="61587c1369dee48be7708319e4d0edd6b7ecf0430e6aa5d70ac799022822e829" exitCode=0
Jan 29 09:00:16 crc kubenswrapper[5031]: I0129 09:00:16.854345 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerDied","Data":"61587c1369dee48be7708319e4d0edd6b7ecf0430e6aa5d70ac799022822e829"}
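The "Killing container with a grace period ... gracePeriod=30" entries above reflect the standard termination contract: the runtime delivers SIGTERM, waits up to the grace period for a voluntary exit (the exit codes 0 and 2 that follow arrive well within it), and only then escalates to SIGKILL. A generic sketch of that pattern for an ordinary child process (illustrative, not CRI code):

```python
# TERM-then-KILL with a grace period, the same shape as the kubelet's stop sequence.
import signal
import subprocess

def stop_with_grace(proc: subprocess.Popen, grace_seconds: int = 30) -> int:
    proc.send_signal(signal.SIGTERM)              # polite shutdown request
    try:
        # exit 0 here is a clean shutdown, like ceilometer-notification-agent above;
        # a nonzero code (e.g. sg-core's 2) means the process exited reporting an error.
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()                               # SIGKILL once the grace period lapses
        return proc.wait()
```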
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.098432 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.126927 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127026 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127063 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd8q4\" (UniqueName: \"kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127085 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127114 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127162 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.127210 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd\") pod \"36d96360-2b92-4cbf-8094-71193ef211c8\" (UID: \"36d96360-2b92-4cbf-8094-71193ef211c8\") "
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.128128 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.129775 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.135630 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts" (OuterVolumeSpecName: "scripts") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.135690 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4" (OuterVolumeSpecName: "kube-api-access-rd8q4") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "kube-api-access-rd8q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.160564 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.229454 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd8q4\" (UniqueName: \"kubernetes.io/projected/36d96360-2b92-4cbf-8094-71193ef211c8-kube-api-access-rd8q4\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.229493 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.229505 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.229516 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d96360-2b92-4cbf-8094-71193ef211c8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.229527 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.253511 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.309561 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data" (OuterVolumeSpecName: "config-data") pod "36d96360-2b92-4cbf-8094-71193ef211c8" (UID: "36d96360-2b92-4cbf-8094-71193ef211c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.331407 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.331456 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d96360-2b92-4cbf-8094-71193ef211c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.864644 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d96360-2b92-4cbf-8094-71193ef211c8","Type":"ContainerDied","Data":"13eab7ad78c2f58008df3c73a98cbfafb7d593ecd074626c2f06422671cfff3f"}
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.864946 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.864978 5031 scope.go:117] "RemoveContainer" containerID="852e4c48e175527824cd6fb62c03c3547509bad32eaf2f324468c5fa6de8f05c"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.897598 5031 scope.go:117] "RemoveContainer" containerID="cd1b33435f3d6b7bc16b79e46d81b3c6a12825e908e98c808070e88501cad27c"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.936236 5031 scope.go:117] "RemoveContainer" containerID="14e80de3bde3749f36b2888df4dec94cdbfc23b8ed2f225bfa5c22065d4f6942"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.950815 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.962992 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.972196 5031 scope.go:117] "RemoveContainer" containerID="61587c1369dee48be7708319e4d0edd6b7ecf0430e6aa5d70ac799022822e829"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.981728 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:00:17 crc kubenswrapper[5031]: E0129 09:00:17.982137 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-notification-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982157 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-notification-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: E0129 09:00:17.982173 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-central-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982181 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-central-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: E0129 09:00:17.982193 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="proxy-httpd"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982199 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="proxy-httpd"
Jan 29 09:00:17 crc kubenswrapper[5031]: E0129 09:00:17.982224 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="sg-core"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982230 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="sg-core"
Jan 29 09:00:17 crc kubenswrapper[5031]: E0129 09:00:17.982245 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577548b3-0ae4-42be-b7bf-a8a79788186e" containerName="collect-profiles"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982251 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="577548b3-0ae4-42be-b7bf-a8a79788186e" containerName="collect-profiles"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982425 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-central-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982446 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="proxy-httpd"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982457 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="577548b3-0ae4-42be-b7bf-a8a79788186e" containerName="collect-profiles"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982480 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="ceilometer-notification-agent"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.982492 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" containerName="sg-core"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.984509 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.991257 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 09:00:17 crc kubenswrapper[5031]: I0129 09:00:17.991718 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.007660 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.008073 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.149585 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.149654 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.149770 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.149831 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.149897 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.150637 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.150767 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.150814 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9k9n\" (UniqueName: \"kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.203975 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.265951 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9k9n\" (UniqueName: \"kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266052 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266073 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266118 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266161 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266211 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266233 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.266249 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.273413 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.273455 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.273651 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.278944 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.279659 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.294086 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.295578 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.303617 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9k9n\" (UniqueName: \"kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n\") pod \"ceilometer-0\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.322225 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.323536 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d96360-2b92-4cbf-8094-71193ef211c8" path="/var/lib/kubelet/pods/36d96360-2b92-4cbf-8094-71193ef211c8/volumes"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.405808 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c4fdc6744-xx4wj"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.732063 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c4fdc6744-xx4wj"
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.863161 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-97c68858b-9q587"]
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.863803 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-97c68858b-9q587" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-log" containerID="cri-o://9887758ff5d01c7a37bbae159f96c09381d7fef5fa405cabf927f23ebeb86ccb" gracePeriod=30
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.863922 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-97c68858b-9q587" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-api" containerID="cri-o://14d76079584a5062e530f31b119d8ff265ab554fc478242705e5abba2fec2a30" gracePeriod=30
Jan 29 09:00:18 crc kubenswrapper[5031]: I0129 09:00:18.966988 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:00:19 crc kubenswrapper[5031]: I0129 09:00:19.897970 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerStarted","Data":"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b"}
Jan 29 09:00:19 crc kubenswrapper[5031]: I0129 09:00:19.898341 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerStarted","Data":"0c93b51be667c26fbc85ff651993673867232ff0867198119d50fe5fb89e6bd6"}
Jan 29 09:00:19 crc kubenswrapper[5031]: I0129 09:00:19.900146 5031 generic.go:334] "Generic (PLEG): container finished" podID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerID="9887758ff5d01c7a37bbae159f96c09381d7fef5fa405cabf927f23ebeb86ccb" exitCode=143
Jan 29 09:00:19 crc kubenswrapper[5031]: I0129 09:00:19.900189 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerDied","Data":"9887758ff5d01c7a37bbae159f96c09381d7fef5fa405cabf927f23ebeb86ccb"}
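The exitCode=143 reported for placement-log above follows the shell convention of 128 plus the signal number, so it decodes to SIGTERM (15): the container exited because it honoured the graceful kill, not because it crashed. By contrast, an application-chosen status such as sg-core's exit code 2 earlier signals an error from the program itself. A two-line check of the decoding:

```python
# 128 + signal number convention: 143 means the process died to SIGTERM.
import signal

assert 128 + signal.SIGTERM == 143
assert signal.Signals(143 - 128).name == "SIGTERM"
```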
Need to start a new one" pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.643718 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.646307 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z6m\" (UniqueName: \"kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.646409 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nxk\" (UniqueName: \"kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.646580 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.646615 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.666231 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dh6kp"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.722474 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1468-account-create-update-zxksp"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.748306 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.748361 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.748436 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5z6m\" (UniqueName: \"kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc 
kubenswrapper[5031]: I0129 09:00:20.748459 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nxk\" (UniqueName: \"kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.749846 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.750775 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.760582 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mhwm7"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.762021 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.769461 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5z6m\" (UniqueName: \"kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m\") pod \"nova-api-1468-account-create-update-zxksp\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.770216 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nxk\" (UniqueName: \"kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk\") pod \"nova-api-db-create-dh6kp\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.771912 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mhwm7"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.847722 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-j7vsx"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.850277 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.861340 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j7vsx"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.874653 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2310-account-create-update-qtfvc"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.876348 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.879159 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.885196 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2310-account-create-update-qtfvc"] Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.924321 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerStarted","Data":"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081"} Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.951916 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.951985 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bwt\" (UniqueName: \"kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.952699 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.952743 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htk7c\" (UniqueName: \"kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.957386 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:20 crc kubenswrapper[5031]: I0129 09:00:20.976847 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.023032 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d412-account-create-update-jsj2g"] Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.024228 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.027333 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.038106 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d412-account-create-update-jsj2g"] Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.057596 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqn6\" (UniqueName: \"kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.057692 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bwt\" (UniqueName: \"kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058079 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058105 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058146 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htk7c\" (UniqueName: \"kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058194 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058237 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.058269 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj57w\" (UniqueName: 
\"kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.059999 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.060502 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.097494 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htk7c\" (UniqueName: \"kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c\") pod \"nova-cell1-db-create-j7vsx\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.100327 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bwt\" (UniqueName: \"kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt\") pod \"nova-cell0-db-create-mhwm7\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.146507 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.158908 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.158983 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.159033 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj57w\" (UniqueName: \"kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.159053 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqn6\" (UniqueName: \"kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.160152 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.171334 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.182309 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqn6\" (UniqueName: \"kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6\") pod \"nova-cell0-2310-account-create-update-qtfvc\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.182349 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj57w\" (UniqueName: \"kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w\") pod \"nova-cell1-d412-account-create-update-jsj2g\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.191784 5031 util.go:30] "No sandbox 
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.209153 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2310-account-create-update-qtfvc"
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.486427 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d412-account-create-update-jsj2g"
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.578816 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dh6kp"]
Jan 29 09:00:21 crc kubenswrapper[5031]: W0129 09:00:21.660784 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54c8338c_c195_4bac_802a_bfa0ba3a7a35.slice/crio-cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba WatchSource:0}: Error finding container cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba: Status 404 returned error can't find the container with id cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.760802 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1468-account-create-update-zxksp"]
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.940322 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dh6kp" event={"ID":"54c8338c-c195-4bac-802a-bfa0ba3a7a35","Type":"ContainerStarted","Data":"cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba"}
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.944571 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerStarted","Data":"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89"}
Jan 29 09:00:21 crc kubenswrapper[5031]: I0129 09:00:21.946521 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1468-account-create-update-zxksp" event={"ID":"59a90621-3be5-48e7-a13e-296d459c61c2","Type":"ContainerStarted","Data":"15035a46d63fa88311d2c8f4ac0631dae3415b54625a6233af206fc0c0eb7ba9"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.080834 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mhwm7"]
Jan 29 09:00:22 crc kubenswrapper[5031]: W0129 09:00:22.083542 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6b8d99f_c56f_4874_8540_82a133c05e28.slice/crio-c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c WatchSource:0}: Error finding container c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c: Status 404 returned error can't find the container with id c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.212998 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2310-account-create-update-qtfvc"]
Jan 29 09:00:22 crc kubenswrapper[5031]: W0129 09:00:22.222042 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbad268d_467d_4c4e_bdd4_0877a1311246.slice/crio-f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b WatchSource:0}: Error finding container f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b: Status 404 returned error can't find the container with id f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.331163 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j7vsx"]
Jan 29 09:00:22 crc kubenswrapper[5031]: W0129 09:00:22.344776 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1244cfbf_875f_4291_be5d_bf559c363dd0.slice/crio-0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b WatchSource:0}: Error finding container 0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b: Status 404 returned error can't find the container with id 0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.385873 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d412-account-create-update-jsj2g"]
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.967859 5031 generic.go:334] "Generic (PLEG): container finished" podID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerID="14d76079584a5062e530f31b119d8ff265ab554fc478242705e5abba2fec2a30" exitCode=0
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.967933 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerDied","Data":"14d76079584a5062e530f31b119d8ff265ab554fc478242705e5abba2fec2a30"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.967961 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-97c68858b-9q587" event={"ID":"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9","Type":"ContainerDied","Data":"f9be6528fa23d8a4c7af6e0a46b35a2d896c0d67e9250f3230e288f29753ccb1"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.968171 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9be6528fa23d8a4c7af6e0a46b35a2d896c0d67e9250f3230e288f29753ccb1"
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.970087 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1468-account-create-update-zxksp" event={"ID":"59a90621-3be5-48e7-a13e-296d459c61c2","Type":"ContainerStarted","Data":"63b1b5ce9df979fd01f1642dcea92c09400990faf3be9c9ab3ffc3cd1e1f7285"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.972030 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" event={"ID":"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b","Type":"ContainerStarted","Data":"9863239b29df4efaf6ecf6f0938b7b5029802d64fbb18497679d9d934b437717"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.972114 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" event={"ID":"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b","Type":"ContainerStarted","Data":"2191e0eebe9d8f70422e2b84317f11026cdacae347b2ea7f52f40c963ea7b0ef"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.974660 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dh6kp" event={"ID":"54c8338c-c195-4bac-802a-bfa0ba3a7a35","Type":"ContainerStarted","Data":"8c6e53ac4e0efffeadd04f4d745cff34d60ff0a8b4c9908d4a0cd82132a3d6fe"}
Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.978778 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j7vsx" event={"ID":"1244cfbf-875f-4291-be5d-bf559c363dd0","Type":"ContainerStarted","Data":"1f4ccfacd2124f8247600b85bbe7bf621c13fef4ee7dded316dddef5657d3517"}
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j7vsx" event={"ID":"1244cfbf-875f-4291-be5d-bf559c363dd0","Type":"ContainerStarted","Data":"1f4ccfacd2124f8247600b85bbe7bf621c13fef4ee7dded316dddef5657d3517"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.978849 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j7vsx" event={"ID":"1244cfbf-875f-4291-be5d-bf559c363dd0","Type":"ContainerStarted","Data":"0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.981472 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" event={"ID":"bbad268d-467d-4c4e-bdd4-0877a1311246","Type":"ContainerStarted","Data":"a688e531f54eb1d214d8c7658d8334103fe99fb6e0ca70bf24c17119ed692a7a"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.981530 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" event={"ID":"bbad268d-467d-4c4e-bdd4-0877a1311246","Type":"ContainerStarted","Data":"f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.989453 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mhwm7" event={"ID":"e6b8d99f-c56f-4874-8540-82a133c05e28","Type":"ContainerStarted","Data":"6872c7a00cc9539aadb44a405693ce20d1719922f39eab6e08fb555030a9ae62"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.989503 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mhwm7" event={"ID":"e6b8d99f-c56f-4874-8540-82a133c05e28","Type":"ContainerStarted","Data":"c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c"} Jan 29 09:00:22 crc kubenswrapper[5031]: I0129 09:00:22.993891 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1468-account-create-update-zxksp" podStartSLOduration=2.993870915 podStartE2EDuration="2.993870915s" podCreationTimestamp="2026-01-29 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:22.989710763 +0000 UTC m=+1303.489298725" watchObservedRunningTime="2026-01-29 09:00:22.993870915 +0000 UTC m=+1303.493458867" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.010981 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-mhwm7" podStartSLOduration=3.010964603 podStartE2EDuration="3.010964603s" podCreationTimestamp="2026-01-29 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:23.008109877 +0000 UTC m=+1303.507697829" watchObservedRunningTime="2026-01-29 09:00:23.010964603 +0000 UTC m=+1303.510552545" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.027358 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-97c68858b-9q587" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.034648 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dh6kp" podStartSLOduration=3.034628568 podStartE2EDuration="3.034628568s" podCreationTimestamp="2026-01-29 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:23.02201209 +0000 UTC m=+1303.521600052" watchObservedRunningTime="2026-01-29 09:00:23.034628568 +0000 UTC m=+1303.534216530" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036409 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036484 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036505 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036565 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036593 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036665 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.036778 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfj2w\" (UniqueName: \"kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w\") pod \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\" (UID: \"4cfbcd95-dc6d-4ee1-81bb-95ef595499e9\") " Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.041310 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs" (OuterVolumeSpecName: "logs") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "logs". 
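The pod_startup_latency_tracker entries above report each pod's startup SLO data as structured key=value fields (podStartSLOduration, podStartE2EDuration, pull timestamps). A small illustrative extractor, under the same assumptions as the earlier sketch (klog lines on stdin, regex keyed to this excerpt's exact field layout):

import re
import sys

# "Observed pod startup duration" entries, e.g.
#   pod="openstack/nova-cell0-db-create-mhwm7" podStartSLOduration=3.010964603 podStartE2EDuration="3.010964603s"
PAT = re.compile(
    r'Observed pod startup duration" pod="([^"]+)"'
    r' podStartSLOduration=([0-9.]+)'
    r' podStartE2EDuration="([^"]+)"'
)

for line in sys.stdin:
    m = PAT.search(line)
    if m:
        pod, slo, e2e = m.groups()
        print(f"{pod}: SLO {float(slo):.3f}s (e2e {e2e})")

The SLO and e2e durations match here because firstStartedPulling/lastFinishedPulling are zero timestamps (no image pull happened); ceilometer-0 further below shows them diverging once pull time is excluded from the SLO figure.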
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.049233 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w" (OuterVolumeSpecName: "kube-api-access-qfj2w") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "kube-api-access-qfj2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.049290 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts" (OuterVolumeSpecName: "scripts") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.051026 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" podStartSLOduration=3.051006248 podStartE2EDuration="3.051006248s" podCreationTimestamp="2026-01-29 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:23.045717806 +0000 UTC m=+1303.545305758" watchObservedRunningTime="2026-01-29 09:00:23.051006248 +0000 UTC m=+1303.550594200" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.102636 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" podStartSLOduration=2.102603073 podStartE2EDuration="2.102603073s" podCreationTimestamp="2026-01-29 09:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:23.065918059 +0000 UTC m=+1303.565506001" watchObservedRunningTime="2026-01-29 09:00:23.102603073 +0000 UTC m=+1303.602191035" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.107861 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-j7vsx" podStartSLOduration=3.107841834 podStartE2EDuration="3.107841834s" podCreationTimestamp="2026-01-29 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:23.085556256 +0000 UTC m=+1303.585144218" watchObservedRunningTime="2026-01-29 09:00:23.107841834 +0000 UTC m=+1303.607429786" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.133991 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.138833 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.138868 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.138881 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfj2w\" (UniqueName: \"kubernetes.io/projected/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-kube-api-access-qfj2w\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.138894 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.161703 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data" (OuterVolumeSpecName: "config-data") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.217075 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.218522 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" (UID: "4cfbcd95-dc6d-4ee1-81bb-95ef595499e9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.241578 5031 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.241614 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:23 crc kubenswrapper[5031]: I0129 09:00:23.241623 5031 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.002816 5031 generic.go:334] "Generic (PLEG): container finished" podID="e6b8d99f-c56f-4874-8540-82a133c05e28" containerID="6872c7a00cc9539aadb44a405693ce20d1719922f39eab6e08fb555030a9ae62" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.002932 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mhwm7" event={"ID":"e6b8d99f-c56f-4874-8540-82a133c05e28","Type":"ContainerDied","Data":"6872c7a00cc9539aadb44a405693ce20d1719922f39eab6e08fb555030a9ae62"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.007300 5031 generic.go:334] "Generic (PLEG): container finished" podID="59a90621-3be5-48e7-a13e-296d459c61c2" containerID="63b1b5ce9df979fd01f1642dcea92c09400990faf3be9c9ab3ffc3cd1e1f7285" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.007384 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1468-account-create-update-zxksp" event={"ID":"59a90621-3be5-48e7-a13e-296d459c61c2","Type":"ContainerDied","Data":"63b1b5ce9df979fd01f1642dcea92c09400990faf3be9c9ab3ffc3cd1e1f7285"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.009449 5031 generic.go:334] "Generic (PLEG): container finished" podID="27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" containerID="9863239b29df4efaf6ecf6f0938b7b5029802d64fbb18497679d9d934b437717" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.009547 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" event={"ID":"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b","Type":"ContainerDied","Data":"9863239b29df4efaf6ecf6f0938b7b5029802d64fbb18497679d9d934b437717"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.011210 5031 generic.go:334] "Generic (PLEG): container finished" podID="54c8338c-c195-4bac-802a-bfa0ba3a7a35" containerID="8c6e53ac4e0efffeadd04f4d745cff34d60ff0a8b4c9908d4a0cd82132a3d6fe" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.011237 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dh6kp" event={"ID":"54c8338c-c195-4bac-802a-bfa0ba3a7a35","Type":"ContainerDied","Data":"8c6e53ac4e0efffeadd04f4d745cff34d60ff0a8b4c9908d4a0cd82132a3d6fe"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.012642 5031 generic.go:334] "Generic (PLEG): container finished" podID="1244cfbf-875f-4291-be5d-bf559c363dd0" containerID="1f4ccfacd2124f8247600b85bbe7bf621c13fef4ee7dded316dddef5657d3517" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.012713 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-j7vsx" event={"ID":"1244cfbf-875f-4291-be5d-bf559c363dd0","Type":"ContainerDied","Data":"1f4ccfacd2124f8247600b85bbe7bf621c13fef4ee7dded316dddef5657d3517"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.014438 5031 generic.go:334] "Generic (PLEG): container finished" podID="bbad268d-467d-4c4e-bdd4-0877a1311246" containerID="a688e531f54eb1d214d8c7658d8334103fe99fb6e0ca70bf24c17119ed692a7a" exitCode=0 Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.014466 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" event={"ID":"bbad268d-467d-4c4e-bdd4-0877a1311246","Type":"ContainerDied","Data":"a688e531f54eb1d214d8c7658d8334103fe99fb6e0ca70bf24c17119ed692a7a"} Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.014670 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-97c68858b-9q587" Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.155997 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-97c68858b-9q587"] Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.164448 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-97c68858b-9q587"] Jan 29 09:00:24 crc kubenswrapper[5031]: I0129 09:00:24.295690 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" path="/var/lib/kubelet/pods/4cfbcd95-dc6d-4ee1-81bb-95ef595499e9/volumes" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.047657 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerStarted","Data":"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95"} Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.048305 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.085376 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.44741215 podStartE2EDuration="8.085330353s" podCreationTimestamp="2026-01-29 09:00:17 +0000 UTC" firstStartedPulling="2026-01-29 09:00:18.958037926 +0000 UTC m=+1299.457625878" lastFinishedPulling="2026-01-29 09:00:24.595956129 +0000 UTC m=+1305.095544081" observedRunningTime="2026-01-29 09:00:25.072984881 +0000 UTC m=+1305.572572833" watchObservedRunningTime="2026-01-29 09:00:25.085330353 +0000 UTC m=+1305.584918305" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.560216 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j7vsx" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.713628 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts\") pod \"1244cfbf-875f-4291-be5d-bf559c363dd0\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.713768 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htk7c\" (UniqueName: \"kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c\") pod \"1244cfbf-875f-4291-be5d-bf559c363dd0\" (UID: \"1244cfbf-875f-4291-be5d-bf559c363dd0\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.714357 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1244cfbf-875f-4291-be5d-bf559c363dd0" (UID: "1244cfbf-875f-4291-be5d-bf559c363dd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.722943 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c" (OuterVolumeSpecName: "kube-api-access-htk7c") pod "1244cfbf-875f-4291-be5d-bf559c363dd0" (UID: "1244cfbf-875f-4291-be5d-bf559c363dd0"). InnerVolumeSpecName "kube-api-access-htk7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.800802 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dh6kp" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.809944 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.816931 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1244cfbf-875f-4291-be5d-bf559c363dd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.816973 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htk7c\" (UniqueName: \"kubernetes.io/projected/1244cfbf-875f-4291-be5d-bf559c363dd0-kube-api-access-htk7c\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.842019 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1468-account-create-update-zxksp" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.853873 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.881806 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mhwm7" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.917646 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts\") pod \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.917788 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95nxk\" (UniqueName: \"kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk\") pod \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\" (UID: \"54c8338c-c195-4bac-802a-bfa0ba3a7a35\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.917827 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj57w\" (UniqueName: \"kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w\") pod \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.917948 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts\") pod \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\" (UID: \"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b\") " Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.918063 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54c8338c-c195-4bac-802a-bfa0ba3a7a35" (UID: "54c8338c-c195-4bac-802a-bfa0ba3a7a35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.918568 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c8338c-c195-4bac-802a-bfa0ba3a7a35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.919027 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" (UID: "27bdb3af-5e68-4db3-a04a-b8dda8d56d3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.923082 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk" (OuterVolumeSpecName: "kube-api-access-95nxk") pod "54c8338c-c195-4bac-802a-bfa0ba3a7a35" (UID: "54c8338c-c195-4bac-802a-bfa0ba3a7a35"). InnerVolumeSpecName "kube-api-access-95nxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:25 crc kubenswrapper[5031]: I0129 09:00:25.923675 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w" (OuterVolumeSpecName: "kube-api-access-nj57w") pod "27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" (UID: "27bdb3af-5e68-4db3-a04a-b8dda8d56d3b"). 
InnerVolumeSpecName "kube-api-access-nj57w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024218 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts\") pod \"bbad268d-467d-4c4e-bdd4-0877a1311246\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024343 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5z6m\" (UniqueName: \"kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m\") pod \"59a90621-3be5-48e7-a13e-296d459c61c2\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024425 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts\") pod \"e6b8d99f-c56f-4874-8540-82a133c05e28\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024510 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5bwt\" (UniqueName: \"kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt\") pod \"e6b8d99f-c56f-4874-8540-82a133c05e28\" (UID: \"e6b8d99f-c56f-4874-8540-82a133c05e28\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024530 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcqn6\" (UniqueName: \"kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6\") pod \"bbad268d-467d-4c4e-bdd4-0877a1311246\" (UID: \"bbad268d-467d-4c4e-bdd4-0877a1311246\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024562 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts\") pod \"59a90621-3be5-48e7-a13e-296d459c61c2\" (UID: \"59a90621-3be5-48e7-a13e-296d459c61c2\") " Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024864 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj57w\" (UniqueName: \"kubernetes.io/projected/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-kube-api-access-nj57w\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024883 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.024896 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95nxk\" (UniqueName: \"kubernetes.io/projected/54c8338c-c195-4bac-802a-bfa0ba3a7a35-kube-api-access-95nxk\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.025020 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6b8d99f-c56f-4874-8540-82a133c05e28" (UID: "e6b8d99f-c56f-4874-8540-82a133c05e28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.025208 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59a90621-3be5-48e7-a13e-296d459c61c2" (UID: "59a90621-3be5-48e7-a13e-296d459c61c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.025275 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbad268d-467d-4c4e-bdd4-0877a1311246" (UID: "bbad268d-467d-4c4e-bdd4-0877a1311246"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.028705 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m" (OuterVolumeSpecName: "kube-api-access-q5z6m") pod "59a90621-3be5-48e7-a13e-296d459c61c2" (UID: "59a90621-3be5-48e7-a13e-296d459c61c2"). InnerVolumeSpecName "kube-api-access-q5z6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.028721 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt" (OuterVolumeSpecName: "kube-api-access-f5bwt") pod "e6b8d99f-c56f-4874-8540-82a133c05e28" (UID: "e6b8d99f-c56f-4874-8540-82a133c05e28"). InnerVolumeSpecName "kube-api-access-f5bwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.032721 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6" (OuterVolumeSpecName: "kube-api-access-pcqn6") pod "bbad268d-467d-4c4e-bdd4-0877a1311246" (UID: "bbad268d-467d-4c4e-bdd4-0877a1311246"). InnerVolumeSpecName "kube-api-access-pcqn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.070404 5031 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.071641 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1468-account-create-update-zxksp" event={"ID":"59a90621-3be5-48e7-a13e-296d459c61c2","Type":"ContainerDied","Data":"15035a46d63fa88311d2c8f4ac0631dae3415b54625a6233af206fc0c0eb7ba9"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.071680 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15035a46d63fa88311d2c8f4ac0631dae3415b54625a6233af206fc0c0eb7ba9"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.076786 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d412-account-create-update-jsj2g" event={"ID":"27bdb3af-5e68-4db3-a04a-b8dda8d56d3b","Type":"ContainerDied","Data":"2191e0eebe9d8f70422e2b84317f11026cdacae347b2ea7f52f40c963ea7b0ef"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.077063 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2191e0eebe9d8f70422e2b84317f11026cdacae347b2ea7f52f40c963ea7b0ef"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.076911 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d412-account-create-update-jsj2g"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.078432 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dh6kp" event={"ID":"54c8338c-c195-4bac-802a-bfa0ba3a7a35","Type":"ContainerDied","Data":"cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.078455 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf077873254fec3152fa712ddef1dd85e4b6293fb0b302e4a7d1d444e99ef0ba"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.078503 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dh6kp"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.084613 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j7vsx"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.084616 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j7vsx" event={"ID":"1244cfbf-875f-4291-be5d-bf559c363dd0","Type":"ContainerDied","Data":"0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.085024 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2031e378e19ad75e67a432ea317850a46e2b5737e1cc870186b442a80f3d7b"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.085952 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2310-account-create-update-qtfvc" event={"ID":"bbad268d-467d-4c4e-bdd4-0877a1311246","Type":"ContainerDied","Data":"f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.085995 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b005d4086ed019d0a878c9373e40afb6f698c5d4540dd7d609522268a2e21b"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.086059 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2310-account-create-update-qtfvc"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.090845 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mhwm7"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.090883 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mhwm7" event={"ID":"e6b8d99f-c56f-4874-8540-82a133c05e28","Type":"ContainerDied","Data":"c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c"}
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.090931 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75a6d713b1273a07cd67cdf88ac023786d00256a2478e32fc8a9aa01f80fa0c"
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126490 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5bwt\" (UniqueName: \"kubernetes.io/projected/e6b8d99f-c56f-4874-8540-82a133c05e28-kube-api-access-f5bwt\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126615 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcqn6\" (UniqueName: \"kubernetes.io/projected/bbad268d-467d-4c4e-bdd4-0877a1311246-kube-api-access-pcqn6\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126651 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a90621-3be5-48e7-a13e-296d459c61c2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126668 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbad268d-467d-4c4e-bdd4-0877a1311246-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126684 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5z6m\" (UniqueName: \"kubernetes.io/projected/59a90621-3be5-48e7-a13e-296d459c61c2-kube-api-access-q5z6m\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:26 crc kubenswrapper[5031]: I0129 09:00:26.126704 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b8d99f-c56f-4874-8540-82a133c05e28-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.209750 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jr7x7"]
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211278 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211298 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211316 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c8338c-c195-4bac-802a-bfa0ba3a7a35" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211322 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c8338c-c195-4bac-802a-bfa0ba3a7a35" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211351 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-log"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211360 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-log"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211399 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad268d-467d-4c4e-bdd4-0877a1311246" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211405 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad268d-467d-4c4e-bdd4-0877a1311246" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211422 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1244cfbf-875f-4291-be5d-bf559c363dd0" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211429 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1244cfbf-875f-4291-be5d-bf559c363dd0" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211452 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b8d99f-c56f-4874-8540-82a133c05e28" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211460 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b8d99f-c56f-4874-8540-82a133c05e28" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211477 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-api"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211484 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-api"
Jan 29 09:00:31 crc kubenswrapper[5031]: E0129 09:00:31.211497 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a90621-3be5-48e7-a13e-296d459c61c2" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211505 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a90621-3be5-48e7-a13e-296d459c61c2" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211692 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbad268d-467d-4c4e-bdd4-0877a1311246" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211709 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c8338c-c195-4bac-802a-bfa0ba3a7a35" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211720 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-log"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211732 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b8d99f-c56f-4874-8540-82a133c05e28" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211741 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211751 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1244cfbf-875f-4291-be5d-bf559c363dd0" containerName="mariadb-database-create"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211760 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a90621-3be5-48e7-a13e-296d459c61c2" containerName="mariadb-account-create-update"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.211774 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfbcd95-dc6d-4ee1-81bb-95ef595499e9" containerName="placement-api"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.217430 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.229251 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.229723 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.230065 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pmvxd"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.244521 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.244682 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4bc\" (UniqueName: \"kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.244785 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.245279 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.255239 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jr7x7"]
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.347962 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.348274 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.348393 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.348495 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4bc\" (UniqueName: \"kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.355308 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.357208 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.358108 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.366399 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4bc\" (UniqueName: \"kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc\") pod \"nova-cell0-conductor-db-sync-jr7x7\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.549221 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.589840 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.590192 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-central-agent" containerID="cri-o://806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b" gracePeriod=30 Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.590341 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="sg-core" containerID="cri-o://a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89" gracePeriod=30 Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.590447 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-notification-agent" containerID="cri-o://ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081" gracePeriod=30 Jan 29 09:00:31 crc kubenswrapper[5031]: I0129 09:00:31.590306 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="proxy-httpd" containerID="cri-o://dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95" gracePeriod=30 Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.076416 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jr7x7"] Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154817 5031 generic.go:334] "Generic (PLEG): container finished" podID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerID="dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95" exitCode=0 Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154903 5031 generic.go:334] "Generic (PLEG): container finished" podID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerID="a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89" exitCode=2 Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154912 5031 generic.go:334] "Generic (PLEG): container finished" podID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerID="806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b" exitCode=0 Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154919 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerDied","Data":"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95"} Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154972 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerDied","Data":"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89"} Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.154988 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerDied","Data":"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b"} Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.156335 5031 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" event={"ID":"c240bea9-22e4-4a3c-8237-0d09838c72d9","Type":"ContainerStarted","Data":"f69e12f0f6f6a4227503397dbf0668196cb533e7f169a80e6043ae5b8ae0efef"} Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.853115 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.876999 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.877330 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.877466 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9k9n\" (UniqueName: \"kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.877653 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.877911 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.878419 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.878539 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.878649 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml\") pod \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\" (UID: \"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a\") " Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.879206 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.879341 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.883690 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n" (OuterVolumeSpecName: "kube-api-access-p9k9n") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "kube-api-access-p9k9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.906325 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts" (OuterVolumeSpecName: "scripts") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.946458 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.958931 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.981517 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.981570 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.981580 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.981588 5031 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.981596 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9k9n\" (UniqueName: \"kubernetes.io/projected/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-kube-api-access-p9k9n\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.982788 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:32 crc kubenswrapper[5031]: I0129 09:00:32.990726 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.016862 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data" (OuterVolumeSpecName: "config-data") pod "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" (UID: "0da113c3-1445-4633-a1d6-b3cd6d6a3d7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.084768 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.084813 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.168169 5031 generic.go:334] "Generic (PLEG): container finished" podID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerID="ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081" exitCode=0 Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.168218 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerDied","Data":"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081"} Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.168267 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.168291 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da113c3-1445-4633-a1d6-b3cd6d6a3d7a","Type":"ContainerDied","Data":"0c93b51be667c26fbc85ff651993673867232ff0867198119d50fe5fb89e6bd6"} Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.168337 5031 scope.go:117] "RemoveContainer" containerID="dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.204813 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.212534 5031 scope.go:117] "RemoveContainer" containerID="a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.231018 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.243013 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.243677 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-notification-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.243703 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-notification-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.243721 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="proxy-httpd" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.243728 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="proxy-httpd" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.243755 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="sg-core" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.243763 5031 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="sg-core" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.243788 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-central-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.243800 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-central-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.244004 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="sg-core" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.244022 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-notification-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.244039 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="proxy-httpd" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.244051 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" containerName="ceilometer-central-agent" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.245960 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.248052 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.248705 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.252214 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.253454 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.282469 5031 scope.go:117] "RemoveContainer" containerID="ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.287738 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.287790 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.287826 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.288065 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.288386 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.288421 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4qjg\" (UniqueName: \"kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.288654 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.288716 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.313641 5031 scope.go:117] "RemoveContainer" containerID="806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.337408 5031 scope.go:117] "RemoveContainer" containerID="dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.338110 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95\": container with ID starting with dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95 not found: ID does not exist" containerID="dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338150 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95"} err="failed to get container status \"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95\": rpc error: code = NotFound desc = could not find container \"dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95\": container with ID starting with dfd65584c15652cb3a68bf55e16ebe26e8c8c46735bcc509169302b6fbd1fe95 not found: ID does not exist" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338175 5031 scope.go:117] "RemoveContainer" containerID="a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.338577 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89\": container with ID starting with a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89 not found: ID does not exist" containerID="a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338594 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89"} err="failed to get container status \"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89\": rpc error: code = NotFound desc = could not find container \"a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89\": container with ID starting with a5455228bf055e3633a89519084f8b2ced64a0efddff1d37cab23cd8b7806c89 not found: ID does not exist" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338607 5031 scope.go:117] "RemoveContainer" containerID="ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.338835 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081\": container with ID starting with ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081 not found: ID does not exist" containerID="ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338863 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081"} err="failed to get container status \"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081\": rpc error: code = NotFound desc = could not find container \"ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081\": container with ID starting with ed0bdea141a3d683832094ebda0d8568417277bd6e812bcebbc4b0b47fc7a081 not found: ID does not exist" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.338874 5031 scope.go:117] "RemoveContainer" containerID="806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b" Jan 29 09:00:33 crc kubenswrapper[5031]: E0129 09:00:33.339205 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b\": container with ID starting with 806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b not found: ID does not exist" containerID="806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.339261 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b"} err="failed to get container status \"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b\": rpc error: code = NotFound desc = could not find container \"806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b\": container with ID starting with 806ef190744e63111c07d4ffa175646ad5ec811f09ebf31fa33dfbbca708618b not found: ID does not exist" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.392733 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.392780 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4qjg\" (UniqueName: \"kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.393668 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.393720 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.393772 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.393802 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.394131 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.394154 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.394693 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.395059 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.398306 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.398858 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.399737 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.400389 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.403804 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.413730 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4qjg\" (UniqueName: \"kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg\") pod \"ceilometer-0\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " pod="openstack/ceilometer-0" Jan 29 09:00:33 crc kubenswrapper[5031]: I0129 09:00:33.566954 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:00:34 crc kubenswrapper[5031]: I0129 09:00:34.047814 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:00:34 crc kubenswrapper[5031]: W0129 09:00:34.056667 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c4a414d_85d4_4586_a252_47b7db649478.slice/crio-ff6df5cd21d1a2769f9b095ac099e5fbe0dbb48c79ff022df7370035ebf974bd WatchSource:0}: Error finding container ff6df5cd21d1a2769f9b095ac099e5fbe0dbb48c79ff022df7370035ebf974bd: Status 404 returned error can't find the container with id ff6df5cd21d1a2769f9b095ac099e5fbe0dbb48c79ff022df7370035ebf974bd Jan 29 09:00:34 crc kubenswrapper[5031]: I0129 09:00:34.182718 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerStarted","Data":"ff6df5cd21d1a2769f9b095ac099e5fbe0dbb48c79ff022df7370035ebf974bd"} Jan 29 09:00:34 crc kubenswrapper[5031]: I0129 09:00:34.293194 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da113c3-1445-4633-a1d6-b3cd6d6a3d7a" path="/var/lib/kubelet/pods/0da113c3-1445-4633-a1d6-b3cd6d6a3d7a/volumes" Jan 29 09:00:35 crc kubenswrapper[5031]: I0129 09:00:35.192714 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerStarted","Data":"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"} Jan 29 09:00:38 crc kubenswrapper[5031]: I0129 09:00:38.494908 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:00:38 crc kubenswrapper[5031]: I0129 09:00:38.495663 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:00:41 crc kubenswrapper[5031]: I0129 09:00:41.249749 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerStarted","Data":"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"} Jan 29 09:00:41 crc kubenswrapper[5031]: I0129 09:00:41.252510 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" event={"ID":"c240bea9-22e4-4a3c-8237-0d09838c72d9","Type":"ContainerStarted","Data":"a251549a584fc8b9cff455b6494c6e42b9aa45b3e0f041d3471f2293a6ad4592"} Jan 29 09:00:41 crc kubenswrapper[5031]: I0129 09:00:41.280099 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" podStartSLOduration=1.7995991980000001 podStartE2EDuration="10.280077365s" podCreationTimestamp="2026-01-29 09:00:31 +0000 UTC" firstStartedPulling="2026-01-29 09:00:32.084033445 +0000 UTC m=+1312.583621397" lastFinishedPulling="2026-01-29 09:00:40.564511612 +0000 UTC m=+1321.064099564" observedRunningTime="2026-01-29 09:00:41.27430316 +0000 UTC m=+1321.773891122" watchObservedRunningTime="2026-01-29 
09:00:41.280077365 +0000 UTC m=+1321.779665317" Jan 29 09:00:42 crc kubenswrapper[5031]: I0129 09:00:42.264418 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerStarted","Data":"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df"} Jan 29 09:00:45 crc kubenswrapper[5031]: I0129 09:00:45.323467 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerStarted","Data":"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853"} Jan 29 09:00:45 crc kubenswrapper[5031]: I0129 09:00:45.325201 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:00:45 crc kubenswrapper[5031]: I0129 09:00:45.350204 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.048798049 podStartE2EDuration="12.350182413s" podCreationTimestamp="2026-01-29 09:00:33 +0000 UTC" firstStartedPulling="2026-01-29 09:00:34.058999726 +0000 UTC m=+1314.558587678" lastFinishedPulling="2026-01-29 09:00:44.36038409 +0000 UTC m=+1324.859972042" observedRunningTime="2026-01-29 09:00:45.347317176 +0000 UTC m=+1325.846905148" watchObservedRunningTime="2026-01-29 09:00:45.350182413 +0000 UTC m=+1325.849770365" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.151803 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494621-vw7kq"] Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.153704 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.164765 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-vw7kq"] Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.271801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsd8v\" (UniqueName: \"kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.272203 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.272457 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.272506 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " 
pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.374479 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsd8v\" (UniqueName: \"kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.374576 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.374670 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.374709 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.382601 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.383393 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.384090 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.394484 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsd8v\" (UniqueName: \"kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v\") pod \"keystone-cron-29494621-vw7kq\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.447128 5031 generic.go:334] "Generic (PLEG): container finished" podID="c240bea9-22e4-4a3c-8237-0d09838c72d9" containerID="a251549a584fc8b9cff455b6494c6e42b9aa45b3e0f041d3471f2293a6ad4592" exitCode=0 Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.447190 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" 
event={"ID":"c240bea9-22e4-4a3c-8237-0d09838c72d9","Type":"ContainerDied","Data":"a251549a584fc8b9cff455b6494c6e42b9aa45b3e0f041d3471f2293a6ad4592"} Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.471715 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:00 crc kubenswrapper[5031]: I0129 09:01:00.949616 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-vw7kq"] Jan 29 09:01:00 crc kubenswrapper[5031]: W0129 09:01:00.956966 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ee4f52_58c1_4e47_b074_1f2a379b5eb2.slice/crio-41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880 WatchSource:0}: Error finding container 41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880: Status 404 returned error can't find the container with id 41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880 Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.458002 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-vw7kq" event={"ID":"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2","Type":"ContainerStarted","Data":"ed065145b9e755ca88339f93807ba10c65f33bafb037f314a4e4083f157697f2"} Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.458358 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-vw7kq" event={"ID":"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2","Type":"ContainerStarted","Data":"41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880"} Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.491581 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494621-vw7kq" podStartSLOduration=1.491563666 podStartE2EDuration="1.491563666s" podCreationTimestamp="2026-01-29 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:01.482879843 +0000 UTC m=+1341.982467795" watchObservedRunningTime="2026-01-29 09:01:01.491563666 +0000 UTC m=+1341.991151618" Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.816007 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.911777 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") pod \"c240bea9-22e4-4a3c-8237-0d09838c72d9\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.912066 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts\") pod \"c240bea9-22e4-4a3c-8237-0d09838c72d9\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.912089 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4bc\" (UniqueName: \"kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc\") pod \"c240bea9-22e4-4a3c-8237-0d09838c72d9\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.912210 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data\") pod \"c240bea9-22e4-4a3c-8237-0d09838c72d9\" (UID: \"c240bea9-22e4-4a3c-8237-0d09838c72d9\") " Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.917490 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc" (OuterVolumeSpecName: "kube-api-access-vp4bc") pod "c240bea9-22e4-4a3c-8237-0d09838c72d9" (UID: "c240bea9-22e4-4a3c-8237-0d09838c72d9"). InnerVolumeSpecName "kube-api-access-vp4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.917551 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts" (OuterVolumeSpecName: "scripts") pod "c240bea9-22e4-4a3c-8237-0d09838c72d9" (UID: "c240bea9-22e4-4a3c-8237-0d09838c72d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.940736 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c240bea9-22e4-4a3c-8237-0d09838c72d9" (UID: "c240bea9-22e4-4a3c-8237-0d09838c72d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:01 crc kubenswrapper[5031]: I0129 09:01:01.949491 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data" (OuterVolumeSpecName: "config-data") pod "c240bea9-22e4-4a3c-8237-0d09838c72d9" (UID: "c240bea9-22e4-4a3c-8237-0d09838c72d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.015022 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.015066 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4bc\" (UniqueName: \"kubernetes.io/projected/c240bea9-22e4-4a3c-8237-0d09838c72d9-kube-api-access-vp4bc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.015084 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.015101 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240bea9-22e4-4a3c-8237-0d09838c72d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.466023 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.466044 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jr7x7" event={"ID":"c240bea9-22e4-4a3c-8237-0d09838c72d9","Type":"ContainerDied","Data":"f69e12f0f6f6a4227503397dbf0668196cb533e7f169a80e6043ae5b8ae0efef"} Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.466088 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69e12f0f6f6a4227503397dbf0668196cb533e7f169a80e6043ae5b8ae0efef" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.615799 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:02 crc kubenswrapper[5031]: E0129 09:01:02.616305 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c240bea9-22e4-4a3c-8237-0d09838c72d9" containerName="nova-cell0-conductor-db-sync" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.616328 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c240bea9-22e4-4a3c-8237-0d09838c72d9" containerName="nova-cell0-conductor-db-sync" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.616691 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c240bea9-22e4-4a3c-8237-0d09838c72d9" containerName="nova-cell0-conductor-db-sync" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.617427 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.619885 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pmvxd" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.620124 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.636086 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.727893 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.728293 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.728793 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqsd\" (UniqueName: \"kubernetes.io/projected/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-kube-api-access-ztqsd\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.830565 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.830943 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.831107 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqsd\" (UniqueName: \"kubernetes.io/projected/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-kube-api-access-ztqsd\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.837441 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.837962 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.853001 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqsd\" (UniqueName: \"kubernetes.io/projected/49fa8048-1d04-42bc-8e37-b6b40e7e5ece-kube-api-access-ztqsd\") pod \"nova-cell0-conductor-0\" (UID: \"49fa8048-1d04-42bc-8e37-b6b40e7e5ece\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:02 crc kubenswrapper[5031]: I0129 09:01:02.938438 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:03 crc kubenswrapper[5031]: I0129 09:01:03.403790 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:03 crc kubenswrapper[5031]: I0129 09:01:03.474877 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"49fa8048-1d04-42bc-8e37-b6b40e7e5ece","Type":"ContainerStarted","Data":"32487121c22ccfeeecb44d2ad8ae58250a1879f4547e8fd1e44549e2ebd0c55e"} Jan 29 09:01:03 crc kubenswrapper[5031]: I0129 09:01:03.578733 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:01:04 crc kubenswrapper[5031]: I0129 09:01:04.488046 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"49fa8048-1d04-42bc-8e37-b6b40e7e5ece","Type":"ContainerStarted","Data":"4625e89b5d6400b40ec4659c43a5c9e2fdc1776234ee4146c8e2ec29b490a6a5"} Jan 29 09:01:04 crc kubenswrapper[5031]: I0129 09:01:04.488641 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:04 crc kubenswrapper[5031]: I0129 09:01:04.490028 5031 generic.go:334] "Generic (PLEG): container finished" podID="d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" containerID="ed065145b9e755ca88339f93807ba10c65f33bafb037f314a4e4083f157697f2" exitCode=0 Jan 29 09:01:04 crc kubenswrapper[5031]: I0129 09:01:04.490088 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-vw7kq" event={"ID":"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2","Type":"ContainerDied","Data":"ed065145b9e755ca88339f93807ba10c65f33bafb037f314a4e4083f157697f2"} Jan 29 09:01:04 crc kubenswrapper[5031]: I0129 09:01:04.512796 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.512763125 podStartE2EDuration="2.512763125s" podCreationTimestamp="2026-01-29 09:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:04.504664168 +0000 UTC m=+1345.004252120" watchObservedRunningTime="2026-01-29 09:01:04.512763125 +0000 UTC m=+1345.012351077" Jan 29 09:01:05 crc kubenswrapper[5031]: I0129 09:01:05.891722 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:05 crc kubenswrapper[5031]: I0129 09:01:05.998657 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data\") pod \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " Jan 29 09:01:05 crc kubenswrapper[5031]: I0129 09:01:05.998889 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle\") pod \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " Jan 29 09:01:05 crc kubenswrapper[5031]: I0129 09:01:05.999090 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsd8v\" (UniqueName: \"kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v\") pod \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " Jan 29 09:01:05 crc kubenswrapper[5031]: I0129 09:01:05.999135 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys\") pod \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\" (UID: \"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2\") " Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.004943 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v" (OuterVolumeSpecName: "kube-api-access-tsd8v") pod "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" (UID: "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2"). InnerVolumeSpecName "kube-api-access-tsd8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.005136 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" (UID: "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.030561 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" (UID: "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.055983 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data" (OuterVolumeSpecName: "config-data") pod "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" (UID: "d3ee4f52-58c1-4e47-b074-1f2a379b5eb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.101404 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.101454 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsd8v\" (UniqueName: \"kubernetes.io/projected/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-kube-api-access-tsd8v\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.101470 5031 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.101479 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ee4f52-58c1-4e47-b074-1f2a379b5eb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.518631 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-vw7kq" event={"ID":"d3ee4f52-58c1-4e47-b074-1f2a379b5eb2","Type":"ContainerDied","Data":"41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880"} Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.518677 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41929ea86859eed71c551daeaa5960672bbfbc27c9eb522232f90b3fd9c9e880" Jan 29 09:01:06 crc kubenswrapper[5031]: I0129 09:01:06.518709 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494621-vw7kq" Jan 29 09:01:08 crc kubenswrapper[5031]: I0129 09:01:08.493519 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:01:08 crc kubenswrapper[5031]: I0129 09:01:08.493905 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:01:12 crc kubenswrapper[5031]: I0129 09:01:12.970830 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.571055 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wkpdv"] Jan 29 09:01:13 crc kubenswrapper[5031]: E0129 09:01:13.571780 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" containerName="keystone-cron" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.571801 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" containerName="keystone-cron" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.571986 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ee4f52-58c1-4e47-b074-1f2a379b5eb2" containerName="keystone-cron" Jan 29 09:01:13 crc 
kubenswrapper[5031]: I0129 09:01:13.572575 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.575138 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.575384 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.584693 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wkpdv"] Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.664761 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.664813 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnxx\" (UniqueName: \"kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.664931 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.665031 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.766724 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.766860 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.766918 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.766938 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2cnxx\" (UniqueName: \"kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.773621 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.781131 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.801938 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.821993 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnxx\" (UniqueName: \"kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx\") pod \"nova-cell0-cell-mapping-wkpdv\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.902119 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.902830 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.912044 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.916301 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:01:13 crc kubenswrapper[5031]: I0129 09:01:13.954667 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.022094 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.029672 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.040485 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.057134 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.059311 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.062970 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.087529 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.089253 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.094684 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.094769 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.094797 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.094842 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6l7\" (UniqueName: \"kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.098797 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.115953 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.179170 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.196965 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197289 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197320 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh6f\" (UniqueName: 
\"kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197357 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197416 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197436 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtdr\" (UniqueName: \"kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197477 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197501 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdcxp\" (UniqueName: \"kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197528 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197553 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197580 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197625 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6l7\" (UniqueName: \"kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" 
Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.197650 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.198074 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.198257 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.203014 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.209316 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.219513 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.227060 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6l7\" (UniqueName: \"kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7\") pod \"nova-api-0\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") " pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.242085 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.244114 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.254215 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.255183 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300085 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300150 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300184 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300237 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300256 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh6f\" (UniqueName: \"kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300317 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300346 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtdr\" (UniqueName: \"kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300412 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdcxp\" (UniqueName: \"kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300441 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300474 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.300948 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.307887 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.308189 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.325594 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.326042 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.328692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.329856 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.340555 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdcxp\" (UniqueName: \"kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp\") pod \"nova-scheduler-0\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.344544 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh6f\" (UniqueName: \"kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.346058 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-srtdr\" (UniqueName: \"kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr\") pod \"nova-metadata-0\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.404090 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.404177 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmplp\" (UniqueName: \"kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.404310 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.404347 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.404427 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.415653 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.432304 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.461580 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.506279 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.506682 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.506723 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.506797 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.506841 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmplp\" (UniqueName: \"kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.508326 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.509866 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.510852 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.511222 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.528936 5031 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmplp\" (UniqueName: \"kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp\") pod \"dnsmasq-dns-8b8cf6657-p92v8\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") " pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.569931 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.631541 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wkpdv"] Jan 29 09:01:14 crc kubenswrapper[5031]: W0129 09:01:14.632314 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe1291d1_c499_4e5b_8aa3_3547c546502c.slice/crio-39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2 WatchSource:0}: Error finding container 39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2: Status 404 returned error can't find the container with id 39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2 Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.718943 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z4wbf"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.720220 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.726029 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.726827 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.728142 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z4wbf"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.779236 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.821188 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwlf\" (UniqueName: \"kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.821708 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.821742 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 
09:01:14.821777 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.923730 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.923776 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.923810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.923884 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkwlf\" (UniqueName: \"kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.931263 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.931269 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.937108 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:14 crc kubenswrapper[5031]: I0129 09:01:14.945282 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkwlf\" (UniqueName: \"kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf\") pod \"nova-cell1-conductor-db-sync-z4wbf\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:15 crc 
kubenswrapper[5031]: I0129 09:01:15.065632 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.078427 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.104084 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.218728 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"] Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.239956 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:15 crc kubenswrapper[5031]: W0129 09:01:15.241011 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f4cc28_e60d_4c01_811a_b4a200372cfa.slice/crio-ea2c9caccdcdc3712b7f27808123072411e6ec1ec53972292291b389bbe80d4b WatchSource:0}: Error finding container ea2c9caccdcdc3712b7f27808123072411e6ec1ec53972292291b389bbe80d4b: Status 404 returned error can't find the container with id ea2c9caccdcdc3712b7f27808123072411e6ec1ec53972292291b389bbe80d4b Jan 29 09:01:15 crc kubenswrapper[5031]: W0129 09:01:15.245569 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dd19394_810f_45d4_b102_ed93e67889bf.slice/crio-cdec024977e751a8ffeb6c75e24b0374610921a8248ccd85b1472b78191adbf1 WatchSource:0}: Error finding container cdec024977e751a8ffeb6c75e24b0374610921a8248ccd85b1472b78191adbf1: Status 404 returned error can't find the container with id cdec024977e751a8ffeb6c75e24b0374610921a8248ccd85b1472b78191adbf1 Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.595997 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerStarted","Data":"d9e292699d77176afd3ba950c5d94dd5603ebae653fe0480a5933f65cc4e7c78"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.599129 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wkpdv" event={"ID":"be1291d1-c499-4e5b-8aa3-3547c546502c","Type":"ContainerStarted","Data":"af21c6b15968681356856bc614dd09edbe62a701468d2fff395ea3f613b05a2e"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.599204 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wkpdv" event={"ID":"be1291d1-c499-4e5b-8aa3-3547c546502c","Type":"ContainerStarted","Data":"39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.605865 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerStarted","Data":"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.614930 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerStarted","Data":"ea2c9caccdcdc3712b7f27808123072411e6ec1ec53972292291b389bbe80d4b"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.632398 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"93653dea-976b-4b7e-8735-679a21ddd8c9","Type":"ContainerStarted","Data":"d6586ca240ed353d878b7fe9747d733886f089c800b7e465988b24a67befa696"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.653179 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerStarted","Data":"cdec024977e751a8ffeb6c75e24b0374610921a8248ccd85b1472b78191adbf1"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.659024 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35f621af-6032-4595-b8d6-35af999c21b5","Type":"ContainerStarted","Data":"0918ab7e493683221ceb8566e6cefc95eeaea80c28eff0eaf5f8477e19b078b8"} Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.676166 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wkpdv" podStartSLOduration=2.676142443 podStartE2EDuration="2.676142443s" podCreationTimestamp="2026-01-29 09:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:15.635947294 +0000 UTC m=+1356.135535267" watchObservedRunningTime="2026-01-29 09:01:15.676142443 +0000 UTC m=+1356.175730395" Jan 29 09:01:15 crc kubenswrapper[5031]: I0129 09:01:15.691321 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z4wbf"] Jan 29 09:01:16 crc kubenswrapper[5031]: I0129 09:01:16.676675 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" event={"ID":"300251ab-347d-4865-9f56-417ae1fc962e","Type":"ContainerStarted","Data":"0cb106eb3119c6fb35f1cd1ec00a1ef07eb2ec7bc394ec2d35e83d475144b7e4"} Jan 29 09:01:16 crc kubenswrapper[5031]: I0129 09:01:16.677067 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" event={"ID":"300251ab-347d-4865-9f56-417ae1fc962e","Type":"ContainerStarted","Data":"6e2e12913c235e550cfef28d986fc17124d1f92138dc3b589f25bdd564594089"} Jan 29 09:01:16 crc kubenswrapper[5031]: I0129 09:01:16.682588 5031 generic.go:334] "Generic (PLEG): container finished" podID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerID="5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567" exitCode=0 Jan 29 09:01:16 crc kubenswrapper[5031]: I0129 09:01:16.682638 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerDied","Data":"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"} Jan 29 09:01:16 crc kubenswrapper[5031]: I0129 09:01:16.714360 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" podStartSLOduration=2.714340885 podStartE2EDuration="2.714340885s" podCreationTimestamp="2026-01-29 09:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:16.699909998 +0000 UTC m=+1357.199497940" watchObservedRunningTime="2026-01-29 09:01:16.714340885 +0000 UTC m=+1357.213928837" Jan 29 09:01:17 crc kubenswrapper[5031]: I0129 09:01:17.630510 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:17 crc kubenswrapper[5031]: I0129 09:01:17.708649 5031 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.715491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35f621af-6032-4595-b8d6-35af999c21b5","Type":"ContainerStarted","Data":"bdbde1af0deb68734a82d570d681d6d66b939ea269ef9332a082762330fb319b"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.715597 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="35f621af-6032-4595-b8d6-35af999c21b5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bdbde1af0deb68734a82d570d681d6d66b939ea269ef9332a082762330fb319b" gracePeriod=30 Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.717561 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerStarted","Data":"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.717602 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerStarted","Data":"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.721219 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerStarted","Data":"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.721996 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.723961 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93653dea-976b-4b7e-8735-679a21ddd8c9","Type":"ContainerStarted","Data":"a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.726474 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerStarted","Data":"b610e1ae449e2c4b20df1766020fadb8bdb506beb4f36d333b65b9c050a9637e"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.726512 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerStarted","Data":"eaf0b431b8240f184645e2883f823b6a623fa4c604261c38a29888f30bf8e5fa"} Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.726854 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-log" containerID="cri-o://eaf0b431b8240f184645e2883f823b6a623fa4c604261c38a29888f30bf8e5fa" gracePeriod=30 Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.726986 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-metadata" containerID="cri-o://b610e1ae449e2c4b20df1766020fadb8bdb506beb4f36d333b65b9c050a9637e" gracePeriod=30 Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.744161 5031 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.102960217 podStartE2EDuration="6.744139615s" podCreationTimestamp="2026-01-29 09:01:13 +0000 UTC" firstStartedPulling="2026-01-29 09:01:15.1282893 +0000 UTC m=+1355.627877242" lastFinishedPulling="2026-01-29 09:01:18.769468688 +0000 UTC m=+1359.269056640" observedRunningTime="2026-01-29 09:01:19.739418988 +0000 UTC m=+1360.239006960" watchObservedRunningTime="2026-01-29 09:01:19.744139615 +0000 UTC m=+1360.243727577" Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.763558 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.825336186 podStartE2EDuration="6.763537395s" podCreationTimestamp="2026-01-29 09:01:13 +0000 UTC" firstStartedPulling="2026-01-29 09:01:14.79180606 +0000 UTC m=+1355.291394012" lastFinishedPulling="2026-01-29 09:01:18.730007269 +0000 UTC m=+1359.229595221" observedRunningTime="2026-01-29 09:01:19.757940975 +0000 UTC m=+1360.257528937" watchObservedRunningTime="2026-01-29 09:01:19.763537395 +0000 UTC m=+1360.263125347" Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.786521 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.133205549 podStartE2EDuration="6.786503531s" podCreationTimestamp="2026-01-29 09:01:13 +0000 UTC" firstStartedPulling="2026-01-29 09:01:15.11672658 +0000 UTC m=+1355.616314532" lastFinishedPulling="2026-01-29 09:01:18.770024552 +0000 UTC m=+1359.269612514" observedRunningTime="2026-01-29 09:01:19.782873444 +0000 UTC m=+1360.282461396" watchObservedRunningTime="2026-01-29 09:01:19.786503531 +0000 UTC m=+1360.286091483" Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.803871 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.333559275 podStartE2EDuration="6.803851317s" podCreationTimestamp="2026-01-29 09:01:13 +0000 UTC" firstStartedPulling="2026-01-29 09:01:15.256868501 +0000 UTC m=+1355.756456453" lastFinishedPulling="2026-01-29 09:01:18.727160543 +0000 UTC m=+1359.226748495" observedRunningTime="2026-01-29 09:01:19.801554665 +0000 UTC m=+1360.301142627" watchObservedRunningTime="2026-01-29 09:01:19.803851317 +0000 UTC m=+1360.303439269" Jan 29 09:01:19 crc kubenswrapper[5031]: I0129 09:01:19.830977 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" podStartSLOduration=5.830959325 podStartE2EDuration="5.830959325s" podCreationTimestamp="2026-01-29 09:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:19.825786905 +0000 UTC m=+1360.325374857" watchObservedRunningTime="2026-01-29 09:01:19.830959325 +0000 UTC m=+1360.330547267" Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.737607 5031 generic.go:334] "Generic (PLEG): container finished" podID="9dd19394-810f-45d4-b102-ed93e67889bf" containerID="b610e1ae449e2c4b20df1766020fadb8bdb506beb4f36d333b65b9c050a9637e" exitCode=0 Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.737968 5031 generic.go:334] "Generic (PLEG): container finished" podID="9dd19394-810f-45d4-b102-ed93e67889bf" containerID="eaf0b431b8240f184645e2883f823b6a623fa4c604261c38a29888f30bf8e5fa" exitCode=143 Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.737696 5031 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerDied","Data":"b610e1ae449e2c4b20df1766020fadb8bdb506beb4f36d333b65b9c050a9637e"} Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.738012 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerDied","Data":"eaf0b431b8240f184645e2883f823b6a623fa4c604261c38a29888f30bf8e5fa"} Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.851009 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.962180 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srtdr\" (UniqueName: \"kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr\") pod \"9dd19394-810f-45d4-b102-ed93e67889bf\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.962234 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle\") pod \"9dd19394-810f-45d4-b102-ed93e67889bf\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.962276 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data\") pod \"9dd19394-810f-45d4-b102-ed93e67889bf\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.962331 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs\") pod \"9dd19394-810f-45d4-b102-ed93e67889bf\" (UID: \"9dd19394-810f-45d4-b102-ed93e67889bf\") " Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.963742 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs" (OuterVolumeSpecName: "logs") pod "9dd19394-810f-45d4-b102-ed93e67889bf" (UID: "9dd19394-810f-45d4-b102-ed93e67889bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[5031]: I0129 09:01:20.969200 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr" (OuterVolumeSpecName: "kube-api-access-srtdr") pod "9dd19394-810f-45d4-b102-ed93e67889bf" (UID: "9dd19394-810f-45d4-b102-ed93e67889bf"). InnerVolumeSpecName "kube-api-access-srtdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.001449 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data" (OuterVolumeSpecName: "config-data") pod "9dd19394-810f-45d4-b102-ed93e67889bf" (UID: "9dd19394-810f-45d4-b102-ed93e67889bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.012728 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dd19394-810f-45d4-b102-ed93e67889bf" (UID: "9dd19394-810f-45d4-b102-ed93e67889bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.064739 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.064776 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dd19394-810f-45d4-b102-ed93e67889bf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.064795 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srtdr\" (UniqueName: \"kubernetes.io/projected/9dd19394-810f-45d4-b102-ed93e67889bf-kube-api-access-srtdr\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.064807 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd19394-810f-45d4-b102-ed93e67889bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.749206 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.749394 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dd19394-810f-45d4-b102-ed93e67889bf","Type":"ContainerDied","Data":"cdec024977e751a8ffeb6c75e24b0374610921a8248ccd85b1472b78191adbf1"} Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.750738 5031 scope.go:117] "RemoveContainer" containerID="b610e1ae449e2c4b20df1766020fadb8bdb506beb4f36d333b65b9c050a9637e" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.779111 5031 scope.go:117] "RemoveContainer" containerID="eaf0b431b8240f184645e2883f823b6a623fa4c604261c38a29888f30bf8e5fa" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.789589 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.798863 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.814106 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:21 crc kubenswrapper[5031]: E0129 09:01:21.814556 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-log" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.814592 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-log" Jan 29 09:01:21 crc kubenswrapper[5031]: E0129 09:01:21.814615 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-metadata" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.814622 5031 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-metadata" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.814844 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-log" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.814883 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" containerName="nova-metadata-metadata" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.816086 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.818414 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.818557 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.829688 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.886537 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv24\" (UniqueName: \"kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.886595 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.886671 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.886690 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.886743 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.988646 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.988703 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.988790 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.988882 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tv24\" (UniqueName: \"kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.988922 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.989716 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.995033 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:21 crc kubenswrapper[5031]: I0129 09:01:21.995334 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.008761 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tv24\" (UniqueName: \"kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.010063 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data\") pod \"nova-metadata-0\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " pod="openstack/nova-metadata-0" Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.138237 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.292948 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd19394-810f-45d4-b102-ed93e67889bf" path="/var/lib/kubelet/pods/9dd19394-810f-45d4-b102-ed93e67889bf/volumes" Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.581202 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:22 crc kubenswrapper[5031]: W0129 09:01:22.581747 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79214f48_df14_4431_a10b_8bfee7c0daac.slice/crio-9e85fff61acbaaa5a7805ccc2c4cbe1c5eada95545591c34e6dd5c2df53c6442 WatchSource:0}: Error finding container 9e85fff61acbaaa5a7805ccc2c4cbe1c5eada95545591c34e6dd5c2df53c6442: Status 404 returned error can't find the container with id 9e85fff61acbaaa5a7805ccc2c4cbe1c5eada95545591c34e6dd5c2df53c6442 Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.763732 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerStarted","Data":"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5"} Jan 29 09:01:22 crc kubenswrapper[5031]: I0129 09:01:22.764053 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerStarted","Data":"9e85fff61acbaaa5a7805ccc2c4cbe1c5eada95545591c34e6dd5c2df53c6442"} Jan 29 09:01:23 crc kubenswrapper[5031]: I0129 09:01:23.775765 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerStarted","Data":"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e"} Jan 29 09:01:23 crc kubenswrapper[5031]: I0129 09:01:23.800256 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.800232126 podStartE2EDuration="2.800232126s" podCreationTimestamp="2026-01-29 09:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:23.796079303 +0000 UTC m=+1364.295667255" watchObservedRunningTime="2026-01-29 09:01:23.800232126 +0000 UTC m=+1364.299820068" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.256297 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.256348 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.420313 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.420373 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.450632 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.462184 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.572166 5031 
Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.714047 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"]
Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.714312 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="dnsmasq-dns" containerID="cri-o://7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1" gracePeriod=10
Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.804819 5031 generic.go:334] "Generic (PLEG): container finished" podID="be1291d1-c499-4e5b-8aa3-3547c546502c" containerID="af21c6b15968681356856bc614dd09edbe62a701468d2fff395ea3f613b05a2e" exitCode=0
Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.805944 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wkpdv" event={"ID":"be1291d1-c499-4e5b-8aa3-3547c546502c","Type":"ContainerDied","Data":"af21c6b15968681356856bc614dd09edbe62a701468d2fff395ea3f613b05a2e"}
Jan 29 09:01:24 crc kubenswrapper[5031]: I0129 09:01:24.889171 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.340630 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.172:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.340891 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.172:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.341882 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl"
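The probe output above is the standard net/http client timeout wording: the kubelet's HTTP prober gave up waiting for response headers from nova-api at 10.217.0.172:8774. A minimal reproduction of that failure mode (the 1-second timeout here is an assumption for illustration; the endpoint is taken from the log):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A client that cannot receive response headers before its Timeout
	// reports: context deadline exceeded (Client.Timeout exceeded while
	// awaiting headers) -- the exact string in the prober.go output above.
	client := &http.Client{Timeout: 1 * time.Second}
	if _, err := client.Get("http://10.217.0.172:8774/"); err != nil {
		fmt.Println("probe failed:", err)
	}
}
```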
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.488756 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc\") pod \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.488798 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb\") pod \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.488926 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config\") pod \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.489055 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvrp9\" (UniqueName: \"kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9\") pod \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.489074 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb\") pod \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\" (UID: \"dabf38f1-9d5a-48fc-a84c-b97c108e4a36\") " Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.499785 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9" (OuterVolumeSpecName: "kube-api-access-qvrp9") pod "dabf38f1-9d5a-48fc-a84c-b97c108e4a36" (UID: "dabf38f1-9d5a-48fc-a84c-b97c108e4a36"). InnerVolumeSpecName "kube-api-access-qvrp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.549564 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dabf38f1-9d5a-48fc-a84c-b97c108e4a36" (UID: "dabf38f1-9d5a-48fc-a84c-b97c108e4a36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.549576 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dabf38f1-9d5a-48fc-a84c-b97c108e4a36" (UID: "dabf38f1-9d5a-48fc-a84c-b97c108e4a36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.561965 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config" (OuterVolumeSpecName: "config") pod "dabf38f1-9d5a-48fc-a84c-b97c108e4a36" (UID: "dabf38f1-9d5a-48fc-a84c-b97c108e4a36"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.571741 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dabf38f1-9d5a-48fc-a84c-b97c108e4a36" (UID: "dabf38f1-9d5a-48fc-a84c-b97c108e4a36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.591296 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.591339 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.591361 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.591386 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.591397 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvrp9\" (UniqueName: \"kubernetes.io/projected/dabf38f1-9d5a-48fc-a84c-b97c108e4a36-kube-api-access-qvrp9\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.815678 5031 generic.go:334] "Generic (PLEG): container finished" podID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerID="7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1" exitCode=0 Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.815741 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" event={"ID":"dabf38f1-9d5a-48fc-a84c-b97c108e4a36","Type":"ContainerDied","Data":"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1"} Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.815768 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" event={"ID":"dabf38f1-9d5a-48fc-a84c-b97c108e4a36","Type":"ContainerDied","Data":"eaee5adecf4269fa7ab157f93a02e7440494ac2800d01eb54a2315da0d6c595d"} Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.815789 5031 scope.go:117] "RemoveContainer" containerID="7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.815906 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-9g9jl" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.828006 5031 generic.go:334] "Generic (PLEG): container finished" podID="300251ab-347d-4865-9f56-417ae1fc962e" containerID="0cb106eb3119c6fb35f1cd1ec00a1ef07eb2ec7bc394ec2d35e83d475144b7e4" exitCode=0 Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.828356 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" event={"ID":"300251ab-347d-4865-9f56-417ae1fc962e","Type":"ContainerDied","Data":"0cb106eb3119c6fb35f1cd1ec00a1ef07eb2ec7bc394ec2d35e83d475144b7e4"} Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.906034 5031 scope.go:117] "RemoveContainer" containerID="3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.910662 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"] Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.933487 5031 scope.go:117] "RemoveContainer" containerID="7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1" Jan 29 09:01:25 crc kubenswrapper[5031]: E0129 09:01:25.934040 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1\": container with ID starting with 7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1 not found: ID does not exist" containerID="7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.934096 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1"} err="failed to get container status \"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1\": rpc error: code = NotFound desc = could not find container \"7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1\": container with ID starting with 7d19f2645a208f2761c134efc8a148dcbbe6e16174a014c3550cc61491343ce1 not found: ID does not exist" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.934132 5031 scope.go:117] "RemoveContainer" containerID="3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e" Jan 29 09:01:25 crc kubenswrapper[5031]: E0129 09:01:25.934483 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e\": container with ID starting with 3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e not found: ID does not exist" containerID="3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.934512 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e"} err="failed to get container status \"3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e\": rpc error: code = NotFound desc = could not find container \"3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e\": container with ID starting with 3d8a1de8c2828d6b24b15f4d27bf48567e949a4d4059078742f13c37319b8a8e not found: ID does not exist" Jan 29 09:01:25 crc kubenswrapper[5031]: I0129 09:01:25.937004 5031 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-9g9jl"] Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.229893 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.293430 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" path="/var/lib/kubelet/pods/dabf38f1-9d5a-48fc-a84c-b97c108e4a36/volumes" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.404994 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnxx\" (UniqueName: \"kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx\") pod \"be1291d1-c499-4e5b-8aa3-3547c546502c\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.405120 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data\") pod \"be1291d1-c499-4e5b-8aa3-3547c546502c\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.405222 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts\") pod \"be1291d1-c499-4e5b-8aa3-3547c546502c\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.405306 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle\") pod \"be1291d1-c499-4e5b-8aa3-3547c546502c\" (UID: \"be1291d1-c499-4e5b-8aa3-3547c546502c\") " Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.409805 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts" (OuterVolumeSpecName: "scripts") pod "be1291d1-c499-4e5b-8aa3-3547c546502c" (UID: "be1291d1-c499-4e5b-8aa3-3547c546502c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.411667 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx" (OuterVolumeSpecName: "kube-api-access-2cnxx") pod "be1291d1-c499-4e5b-8aa3-3547c546502c" (UID: "be1291d1-c499-4e5b-8aa3-3547c546502c"). InnerVolumeSpecName "kube-api-access-2cnxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.436032 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be1291d1-c499-4e5b-8aa3-3547c546502c" (UID: "be1291d1-c499-4e5b-8aa3-3547c546502c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.443728 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data" (OuterVolumeSpecName: "config-data") pod "be1291d1-c499-4e5b-8aa3-3547c546502c" (UID: "be1291d1-c499-4e5b-8aa3-3547c546502c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.507739 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cnxx\" (UniqueName: \"kubernetes.io/projected/be1291d1-c499-4e5b-8aa3-3547c546502c-kube-api-access-2cnxx\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.508106 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.508182 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.508237 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1291d1-c499-4e5b-8aa3-3547c546502c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.837837 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wkpdv" event={"ID":"be1291d1-c499-4e5b-8aa3-3547c546502c","Type":"ContainerDied","Data":"39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2"} Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.839102 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b688c9e67d63742eb45c3f87a33e4f3aff7db5012e7bd0161b8a0a6b1aced2" Jan 29 09:01:26 crc kubenswrapper[5031]: I0129 09:01:26.838072 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wkpdv" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.057348 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.057601 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-log" containerID="cri-o://109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54" gracePeriod=30 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.058017 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-api" containerID="cri-o://e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108" gracePeriod=30 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.078885 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.079112 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerName="nova-scheduler-scheduler" containerID="cri-o://a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3" gracePeriod=30 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.089443 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.089636 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-log" containerID="cri-o://58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5" gracePeriod=30 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.089760 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-metadata" containerID="cri-o://da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" gracePeriod=30 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.138254 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.138333 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.263992 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.424965 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle\") pod \"300251ab-347d-4865-9f56-417ae1fc962e\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.425184 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkwlf\" (UniqueName: \"kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf\") pod \"300251ab-347d-4865-9f56-417ae1fc962e\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.425339 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data\") pod \"300251ab-347d-4865-9f56-417ae1fc962e\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.425439 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts\") pod \"300251ab-347d-4865-9f56-417ae1fc962e\" (UID: \"300251ab-347d-4865-9f56-417ae1fc962e\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.432411 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts" (OuterVolumeSpecName: "scripts") pod "300251ab-347d-4865-9f56-417ae1fc962e" (UID: "300251ab-347d-4865-9f56-417ae1fc962e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.433034 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf" (OuterVolumeSpecName: "kube-api-access-hkwlf") pod "300251ab-347d-4865-9f56-417ae1fc962e" (UID: "300251ab-347d-4865-9f56-417ae1fc962e"). InnerVolumeSpecName "kube-api-access-hkwlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.470511 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "300251ab-347d-4865-9f56-417ae1fc962e" (UID: "300251ab-347d-4865-9f56-417ae1fc962e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.471520 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data" (OuterVolumeSpecName: "config-data") pod "300251ab-347d-4865-9f56-417ae1fc962e" (UID: "300251ab-347d-4865-9f56-417ae1fc962e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.532754 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.532783 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.532791 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300251ab-347d-4865-9f56-417ae1fc962e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.532802 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkwlf\" (UniqueName: \"kubernetes.io/projected/300251ab-347d-4865-9f56-417ae1fc962e-kube-api-access-hkwlf\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.717897 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.843192 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data\") pod \"79214f48-df14-4431-a10b-8bfee7c0daac\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.843337 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs\") pod \"79214f48-df14-4431-a10b-8bfee7c0daac\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.843447 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tv24\" (UniqueName: \"kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24\") pod \"79214f48-df14-4431-a10b-8bfee7c0daac\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.843538 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle\") pod \"79214f48-df14-4431-a10b-8bfee7c0daac\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.843573 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs\") pod \"79214f48-df14-4431-a10b-8bfee7c0daac\" (UID: \"79214f48-df14-4431-a10b-8bfee7c0daac\") " Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.844335 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs" (OuterVolumeSpecName: "logs") pod "79214f48-df14-4431-a10b-8bfee7c0daac" (UID: "79214f48-df14-4431-a10b-8bfee7c0daac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.849925 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24" (OuterVolumeSpecName: "kube-api-access-2tv24") pod "79214f48-df14-4431-a10b-8bfee7c0daac" (UID: "79214f48-df14-4431-a10b-8bfee7c0daac"). InnerVolumeSpecName "kube-api-access-2tv24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.856927 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" event={"ID":"300251ab-347d-4865-9f56-417ae1fc962e","Type":"ContainerDied","Data":"6e2e12913c235e550cfef28d986fc17124d1f92138dc3b589f25bdd564594089"} Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.856975 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e2e12913c235e550cfef28d986fc17124d1f92138dc3b589f25bdd564594089" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.857060 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z4wbf" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.875809 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79214f48-df14-4431-a10b-8bfee7c0daac" (UID: "79214f48-df14-4431-a10b-8bfee7c0daac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.876216 5031 generic.go:334] "Generic (PLEG): container finished" podID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerID="109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54" exitCode=143 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.876471 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerDied","Data":"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54"} Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881434 5031 generic.go:334] "Generic (PLEG): container finished" podID="79214f48-df14-4431-a10b-8bfee7c0daac" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" exitCode=0 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881475 5031 generic.go:334] "Generic (PLEG): container finished" podID="79214f48-df14-4431-a10b-8bfee7c0daac" containerID="58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5" exitCode=143 Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881497 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerDied","Data":"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e"} Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerDied","Data":"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5"} Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881533 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"79214f48-df14-4431-a10b-8bfee7c0daac","Type":"ContainerDied","Data":"9e85fff61acbaaa5a7805ccc2c4cbe1c5eada95545591c34e6dd5c2df53c6442"} Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881551 5031 scope.go:117] "RemoveContainer" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.881978 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.895822 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data" (OuterVolumeSpecName: "config-data") pod "79214f48-df14-4431-a10b-8bfee7c0daac" (UID: "79214f48-df14-4431-a10b-8bfee7c0daac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.940285 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "79214f48-df14-4431-a10b-8bfee7c0daac" (UID: "79214f48-df14-4431-a10b-8bfee7c0daac"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.943713 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:27 crc kubenswrapper[5031]: E0129 09:01:27.944185 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300251ab-347d-4865-9f56-417ae1fc962e" containerName="nova-cell1-conductor-db-sync" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944207 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="300251ab-347d-4865-9f56-417ae1fc962e" containerName="nova-cell1-conductor-db-sync" Jan 29 09:01:27 crc kubenswrapper[5031]: E0129 09:01:27.944232 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="init" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944239 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="init" Jan 29 09:01:27 crc kubenswrapper[5031]: E0129 09:01:27.944248 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="dnsmasq-dns" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944253 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="dnsmasq-dns" Jan 29 09:01:27 crc kubenswrapper[5031]: E0129 09:01:27.944267 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-log" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944274 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-log" Jan 29 09:01:27 crc kubenswrapper[5031]: E0129 09:01:27.944288 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1291d1-c499-4e5b-8aa3-3547c546502c" containerName="nova-manage" Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944294 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1291d1-c499-4e5b-8aa3-3547c546502c" containerName="nova-manage" Jan 29 09:01:27 crc 
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944315 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-metadata"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944538 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="300251ab-347d-4865-9f56-417ae1fc962e" containerName="nova-cell1-conductor-db-sync"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944587 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-metadata"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944629 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" containerName="nova-metadata-log"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944649 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1291d1-c499-4e5b-8aa3-3547c546502c" containerName="nova-manage"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.944674 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabf38f1-9d5a-48fc-a84c-b97c108e4a36" containerName="dnsmasq-dns"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.945301 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.947254 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tv24\" (UniqueName: \"kubernetes.io/projected/79214f48-df14-4431-a10b-8bfee7c0daac-kube-api-access-2tv24\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.947278 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.947287 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79214f48-df14-4431-a10b-8bfee7c0daac-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.947297 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.947307 5031 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79214f48-df14-4431-a10b-8bfee7c0daac-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.950719 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 29 09:01:27 crc kubenswrapper[5031]: I0129 09:01:27.952000 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.007582 5031 scope.go:117] "RemoveContainer" containerID="58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5"
Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.030006 5031 scope.go:117] "RemoveContainer" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e"
"RemoveContainer" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" Jan 29 09:01:28 crc kubenswrapper[5031]: E0129 09:01:28.030408 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e\": container with ID starting with da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e not found: ID does not exist" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.030451 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e"} err="failed to get container status \"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e\": rpc error: code = NotFound desc = could not find container \"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e\": container with ID starting with da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e not found: ID does not exist" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.030485 5031 scope.go:117] "RemoveContainer" containerID="58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5" Jan 29 09:01:28 crc kubenswrapper[5031]: E0129 09:01:28.030880 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5\": container with ID starting with 58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5 not found: ID does not exist" containerID="58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.030907 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5"} err="failed to get container status \"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5\": rpc error: code = NotFound desc = could not find container \"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5\": container with ID starting with 58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5 not found: ID does not exist" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.030927 5031 scope.go:117] "RemoveContainer" containerID="da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.031194 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e"} err="failed to get container status \"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e\": rpc error: code = NotFound desc = could not find container \"da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e\": container with ID starting with da2f89c2ddf5f29e79a50c13a61f870cd115e97bc42d61f7916e5c8ab1d5ca0e not found: ID does not exist" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.031225 5031 scope.go:117] "RemoveContainer" containerID="58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.031482 5031 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5"} err="failed to get container status \"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5\": rpc error: code = NotFound desc = could not find container \"58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5\": container with ID starting with 58381027eae241edf5f86b758fae55a26234a5db9d4ecd8080d537644ad7def5 not found: ID does not exist" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.048907 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.049031 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689c2\" (UniqueName: \"kubernetes.io/projected/fd945c64-5938-4cc6-9eb5-17e013e36aba-kube-api-access-689c2\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.049117 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.151619 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.152220 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.152303 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689c2\" (UniqueName: \"kubernetes.io/projected/fd945c64-5938-4cc6-9eb5-17e013e36aba-kube-api-access-689c2\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.157535 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.157570 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd945c64-5938-4cc6-9eb5-17e013e36aba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 
crc kubenswrapper[5031]: I0129 09:01:28.172455 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689c2\" (UniqueName: \"kubernetes.io/projected/fd945c64-5938-4cc6-9eb5-17e013e36aba-kube-api-access-689c2\") pod \"nova-cell1-conductor-0\" (UID: \"fd945c64-5938-4cc6-9eb5-17e013e36aba\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.261680 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.271464 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.297938 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79214f48-df14-4431-a10b-8bfee7c0daac" path="/var/lib/kubelet/pods/79214f48-df14-4431-a10b-8bfee7c0daac/volumes" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.299612 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.302140 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.305564 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.305951 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.306927 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.311850 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.459748 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krt6\" (UniqueName: \"kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.459799 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.459882 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.459914 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.459994 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.562685 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.563092 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.563564 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krt6\" (UniqueName: \"kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.563602 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.563671 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.564086 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.568578 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.569044 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.569207 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.585905 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krt6\" (UniqueName: \"kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6\") pod \"nova-metadata-0\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.632059 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.781564 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:28 crc kubenswrapper[5031]: W0129 09:01:28.785989 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd945c64_5938_4cc6_9eb5_17e013e36aba.slice/crio-1dc233e0c93b410b6a5e0b8858d7fd5569bf9f7115070ebb88ecc9326c18a2c4 WatchSource:0}: Error finding container 1dc233e0c93b410b6a5e0b8858d7fd5569bf9f7115070ebb88ecc9326c18a2c4: Status 404 returned error can't find the container with id 1dc233e0c93b410b6a5e0b8858d7fd5569bf9f7115070ebb88ecc9326c18a2c4 Jan 29 09:01:28 crc kubenswrapper[5031]: I0129 09:01:28.901659 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fd945c64-5938-4cc6-9eb5-17e013e36aba","Type":"ContainerStarted","Data":"1dc233e0c93b410b6a5e0b8858d7fd5569bf9f7115070ebb88ecc9326c18a2c4"} Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.096163 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:29 crc kubenswrapper[5031]: E0129 09:01:29.422160 5031 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:29 crc kubenswrapper[5031]: E0129 09:01:29.423612 5031 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:29 crc kubenswrapper[5031]: E0129 09:01:29.425182 5031 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:29 crc kubenswrapper[5031]: E0129 09:01:29.425279 5031 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerName="nova-scheduler-scheduler" Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.920928 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerStarted","Data":"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5"} Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.921289 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerStarted","Data":"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45"} Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.921304 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerStarted","Data":"206638579a91195d68db4524e1e92af8c3193e48e7b27651b6b9c7f0e62eebf8"} Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.924497 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fd945c64-5938-4cc6-9eb5-17e013e36aba","Type":"ContainerStarted","Data":"8a5b1a4b35d7c41e29d95bcd30c712f577526194488432bf7ce30c2b163e3b02"} Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.924638 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.942796 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.942763803 podStartE2EDuration="1.942763803s" podCreationTimestamp="2026-01-29 09:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:29.940977605 +0000 UTC m=+1370.440565557" watchObservedRunningTime="2026-01-29 09:01:29.942763803 +0000 UTC m=+1370.442351745" Jan 29 09:01:29 crc kubenswrapper[5031]: I0129 09:01:29.962672 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.962653161 podStartE2EDuration="2.962653161s" podCreationTimestamp="2026-01-29 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:29.956881345 +0000 UTC m=+1370.456469297" watchObservedRunningTime="2026-01-29 09:01:29.962653161 +0000 UTC m=+1370.462241113" Jan 29 09:01:30 crc kubenswrapper[5031]: I0129 09:01:30.938552 5031 generic.go:334] "Generic (PLEG): container finished" podID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerID="a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3" exitCode=0 Jan 29 09:01:30 crc kubenswrapper[5031]: I0129 09:01:30.938625 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93653dea-976b-4b7e-8735-679a21ddd8c9","Type":"ContainerDied","Data":"a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3"} Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.155554 5031 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.322814 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdcxp\" (UniqueName: \"kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp\") pod \"93653dea-976b-4b7e-8735-679a21ddd8c9\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.323559 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle\") pod \"93653dea-976b-4b7e-8735-679a21ddd8c9\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.323719 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data\") pod \"93653dea-976b-4b7e-8735-679a21ddd8c9\" (UID: \"93653dea-976b-4b7e-8735-679a21ddd8c9\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.332273 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp" (OuterVolumeSpecName: "kube-api-access-fdcxp") pod "93653dea-976b-4b7e-8735-679a21ddd8c9" (UID: "93653dea-976b-4b7e-8735-679a21ddd8c9"). InnerVolumeSpecName "kube-api-access-fdcxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.356044 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data" (OuterVolumeSpecName: "config-data") pod "93653dea-976b-4b7e-8735-679a21ddd8c9" (UID: "93653dea-976b-4b7e-8735-679a21ddd8c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.366352 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93653dea-976b-4b7e-8735-679a21ddd8c9" (UID: "93653dea-976b-4b7e-8735-679a21ddd8c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.425840 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.425891 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdcxp\" (UniqueName: \"kubernetes.io/projected/93653dea-976b-4b7e-8735-679a21ddd8c9-kube-api-access-fdcxp\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.425903 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93653dea-976b-4b7e-8735-679a21ddd8c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.834142 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.939405 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs\") pod \"52cf4471-0a46-4b2c-ba06-f17ed494c626\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.939512 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data\") pod \"52cf4471-0a46-4b2c-ba06-f17ed494c626\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.939567 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6l7\" (UniqueName: \"kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7\") pod \"52cf4471-0a46-4b2c-ba06-f17ed494c626\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.939595 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle\") pod \"52cf4471-0a46-4b2c-ba06-f17ed494c626\" (UID: \"52cf4471-0a46-4b2c-ba06-f17ed494c626\") "
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.940004 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs" (OuterVolumeSpecName: "logs") pod "52cf4471-0a46-4b2c-ba06-f17ed494c626" (UID: "52cf4471-0a46-4b2c-ba06-f17ed494c626"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.949278 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7" (OuterVolumeSpecName: "kube-api-access-tg6l7") pod "52cf4471-0a46-4b2c-ba06-f17ed494c626" (UID: "52cf4471-0a46-4b2c-ba06-f17ed494c626"). InnerVolumeSpecName "kube-api-access-tg6l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.952130 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.952145 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93653dea-976b-4b7e-8735-679a21ddd8c9","Type":"ContainerDied","Data":"d6586ca240ed353d878b7fe9747d733886f089c800b7e465988b24a67befa696"}
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.952198 5031 scope.go:117] "RemoveContainer" containerID="a33e235997f9d05d7e5903059de7b0824ff4a183288b7b955bcd81d193b78bd3"
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.957816 5031 generic.go:334] "Generic (PLEG): container finished" podID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerID="e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108" exitCode=0
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.957880 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerDied","Data":"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108"}
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.957912 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52cf4471-0a46-4b2c-ba06-f17ed494c626","Type":"ContainerDied","Data":"d9e292699d77176afd3ba950c5d94dd5603ebae653fe0480a5933f65cc4e7c78"}
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.957993 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.985513 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data" (OuterVolumeSpecName: "config-data") pod "52cf4471-0a46-4b2c-ba06-f17ed494c626" (UID: "52cf4471-0a46-4b2c-ba06-f17ed494c626"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:01:31 crc kubenswrapper[5031]: I0129 09:01:31.987559 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52cf4471-0a46-4b2c-ba06-f17ed494c626" (UID: "52cf4471-0a46-4b2c-ba06-f17ed494c626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.041504 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cf4471-0a46-4b2c-ba06-f17ed494c626-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.043406 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.043557 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6l7\" (UniqueName: \"kubernetes.io/projected/52cf4471-0a46-4b2c-ba06-f17ed494c626-kube-api-access-tg6l7\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.043642 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cf4471-0a46-4b2c-ba06-f17ed494c626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.061341 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.065427 5031 scope.go:117] "RemoveContainer" containerID="e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.093058 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.105825 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: E0129 09:01:32.106315 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-log" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106330 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-log" Jan 29 09:01:32 crc kubenswrapper[5031]: E0129 09:01:32.106359 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-api" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106378 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-api" Jan 29 09:01:32 crc kubenswrapper[5031]: E0129 09:01:32.106391 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerName="nova-scheduler-scheduler" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106404 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerName="nova-scheduler-scheduler" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106592 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" containerName="nova-api-api" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106633 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" containerName="nova-scheduler-scheduler" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.106654 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" 
containerName="nova-api-log" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.107284 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.109330 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.113744 5031 scope.go:117] "RemoveContainer" containerID="109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.131815 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.148640 5031 scope.go:117] "RemoveContainer" containerID="e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108" Jan 29 09:01:32 crc kubenswrapper[5031]: E0129 09:01:32.149115 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108\": container with ID starting with e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108 not found: ID does not exist" containerID="e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.149159 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108"} err="failed to get container status \"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108\": rpc error: code = NotFound desc = could not find container \"e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108\": container with ID starting with e8d98517ed9f4689bd447f3dbeef3e07d82584ed4e12a833f88f94e937c0a108 not found: ID does not exist" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.149186 5031 scope.go:117] "RemoveContainer" containerID="109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54" Jan 29 09:01:32 crc kubenswrapper[5031]: E0129 09:01:32.149584 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54\": container with ID starting with 109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54 not found: ID does not exist" containerID="109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.149741 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54"} err="failed to get container status \"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54\": rpc error: code = NotFound desc = could not find container \"109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54\": container with ID starting with 109aeb37e5c9f4d3a4f036dfbbefdd3580eac6a72830c13352870a707e523f54 not found: ID does not exist" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.247883 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" 
Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.247967 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdqr\" (UniqueName: \"kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.248140 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.293453 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93653dea-976b-4b7e-8735-679a21ddd8c9" path="/var/lib/kubelet/pods/93653dea-976b-4b7e-8735-679a21ddd8c9/volumes" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.294125 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.306661 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.318383 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.320247 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.322572 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.330710 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.349810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.349956 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.350014 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdqr\" (UniqueName: \"kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.355694 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.358049 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.366529 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdqr\" (UniqueName: \"kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr\") pod \"nova-scheduler-0\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.435837 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.451716 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.455345 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlrg\" (UniqueName: \"kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.455433 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.455542 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.558259 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.558763 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.558854 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlrg\" (UniqueName: \"kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.558880 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data\") pod 
\"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.559809 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.562192 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.563615 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.580227 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlrg\" (UniqueName: \"kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg\") pod \"nova-api-0\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.639590 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.864557 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:32 crc kubenswrapper[5031]: I0129 09:01:32.970017 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e89e2668-4736-4be0-b913-4dbf458784e3","Type":"ContainerStarted","Data":"076782b14a7a4244574f712fcfdec3d4b8eee0aa610aab3802570b43840e12db"} Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.118942 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:33 crc kubenswrapper[5031]: W0129 09:01:33.120038 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80bde5a5_398e_447e_b8a4_91654bfe6841.slice/crio-d3be9e60c5ab17513b8ee474fab8c05c6ca883f8c24241c5c37b1c4dc5f58d74 WatchSource:0}: Error finding container d3be9e60c5ab17513b8ee474fab8c05c6ca883f8c24241c5c37b1c4dc5f58d74: Status 404 returned error can't find the container with id d3be9e60c5ab17513b8ee474fab8c05c6ca883f8c24241c5c37b1c4dc5f58d74 Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.632831 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.633237 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.979345 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerStarted","Data":"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407"} Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.979424 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerStarted","Data":"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29"} Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.979437 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerStarted","Data":"d3be9e60c5ab17513b8ee474fab8c05c6ca883f8c24241c5c37b1c4dc5f58d74"} Jan 29 09:01:33 crc kubenswrapper[5031]: I0129 09:01:33.982994 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e89e2668-4736-4be0-b913-4dbf458784e3","Type":"ContainerStarted","Data":"3645e7083c72b0aae5df0626f56b865340ea17eba3edde86617311dee41f0ee8"} Jan 29 09:01:34 crc kubenswrapper[5031]: I0129 09:01:34.006616 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.006573983 podStartE2EDuration="2.006573983s" podCreationTimestamp="2026-01-29 09:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:33.997857247 +0000 UTC m=+1374.497445199" watchObservedRunningTime="2026-01-29 09:01:34.006573983 +0000 UTC m=+1374.506161935" Jan 29 09:01:34 crc kubenswrapper[5031]: I0129 09:01:34.030671 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.030648985 podStartE2EDuration="2.030648985s" podCreationTimestamp="2026-01-29 09:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:34.020918112 +0000 UTC m=+1374.520506064" watchObservedRunningTime="2026-01-29 09:01:34.030648985 +0000 UTC m=+1374.530236937" Jan 29 09:01:34 crc kubenswrapper[5031]: I0129 09:01:34.295604 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cf4471-0a46-4b2c-ba06-f17ed494c626" path="/var/lib/kubelet/pods/52cf4471-0a46-4b2c-ba06-f17ed494c626/volumes" Jan 29 09:01:37 crc kubenswrapper[5031]: I0129 09:01:37.436199 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.342340 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.496907 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.497694 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.497785 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.498894 5031 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.498965 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed" gracePeriod=600 Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.633134 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:01:38 crc kubenswrapper[5031]: I0129 09:01:38.633189 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.035689 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed" exitCode=0 Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.036025 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed"} Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.036053 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"} Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.036071 5031 scope.go:117] "RemoveContainer" containerID="e25b3544ed82f73d3e69370fae71f9310174a457f060c5ae77619bf418f1fb57" Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.648577 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:39 crc kubenswrapper[5031]: I0129 09:01:39.648577 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.436233 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.464538 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.640595 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.640650 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.987604 5031 patch_prober.go:28] interesting pod/router-default-5444994796-4v677 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:01:42 crc kubenswrapper[5031]: I0129 09:01:42.987996 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-4v677" podUID="8f1b85d0-d1d7-435f-aee3-2953e7a8ad83" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.083612 5031 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-dh6cs container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.083695 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" podUID="cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.083629 5031 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-dh6cs container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.083777 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dh6cs" podUID="cc9164e3-26b6-4f60-bf59-8cd52e5f7b0a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.103218 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.117136 5031 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tvddp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.117194 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvddp" podUID="e577602e-26da-4f65-8997-38b52ae67d82" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.180505 5031 prober.go:107] "Probe failed" 
probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-tsvjs" podUID="fb9eb323-2fa1-4562-a71f-ccb3f771395b" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.321689 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7777f7948d-dxh4l" podUID="417f7fc8-934e-415e-89cc-fb09ba21917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.722548 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:43 crc kubenswrapper[5031]: I0129 09:01:43.722577 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:48 crc kubenswrapper[5031]: I0129 09:01:48.639863 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:01:48 crc kubenswrapper[5031]: I0129 09:01:48.640534 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:01:48 crc kubenswrapper[5031]: I0129 09:01:48.647318 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:01:48 crc kubenswrapper[5031]: I0129 09:01:48.647618 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.137972 5031 generic.go:334] "Generic (PLEG): container finished" podID="35f621af-6032-4595-b8d6-35af999c21b5" containerID="bdbde1af0deb68734a82d570d681d6d66b939ea269ef9332a082762330fb319b" exitCode=137 Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.138054 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35f621af-6032-4595-b8d6-35af999c21b5","Type":"ContainerDied","Data":"bdbde1af0deb68734a82d570d681d6d66b939ea269ef9332a082762330fb319b"} Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.138344 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35f621af-6032-4595-b8d6-35af999c21b5","Type":"ContainerDied","Data":"0918ab7e493683221ceb8566e6cefc95eeaea80c28eff0eaf5f8477e19b078b8"} Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.138382 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0918ab7e493683221ceb8566e6cefc95eeaea80c28eff0eaf5f8477e19b078b8" Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.157552 5031 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.233015 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data\") pod \"35f621af-6032-4595-b8d6-35af999c21b5\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") "
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.233640 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle\") pod \"35f621af-6032-4595-b8d6-35af999c21b5\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") "
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.233773 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh6f\" (UniqueName: \"kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f\") pod \"35f621af-6032-4595-b8d6-35af999c21b5\" (UID: \"35f621af-6032-4595-b8d6-35af999c21b5\") "
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.245640 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f" (OuterVolumeSpecName: "kube-api-access-mzh6f") pod "35f621af-6032-4595-b8d6-35af999c21b5" (UID: "35f621af-6032-4595-b8d6-35af999c21b5"). InnerVolumeSpecName "kube-api-access-mzh6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.286865 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f621af-6032-4595-b8d6-35af999c21b5" (UID: "35f621af-6032-4595-b8d6-35af999c21b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.299743 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data" (OuterVolumeSpecName: "config-data") pod "35f621af-6032-4595-b8d6-35af999c21b5" (UID: "35f621af-6032-4595-b8d6-35af999c21b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.337030 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.337113 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh6f\" (UniqueName: \"kubernetes.io/projected/35f621af-6032-4595-b8d6-35af999c21b5-kube-api-access-mzh6f\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:50 crc kubenswrapper[5031]: I0129 09:01:50.337132 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f621af-6032-4595-b8d6-35af999c21b5-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.148553 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.192397 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.210355 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.221964 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:01:51 crc kubenswrapper[5031]: E0129 09:01:51.222823 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f621af-6032-4595-b8d6-35af999c21b5" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.222920 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f621af-6032-4595-b8d6-35af999c21b5" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.223261 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f621af-6032-4595-b8d6-35af999c21b5" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.224399 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.226949 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.228773 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.229179 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.237250 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.361146 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.361725 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.361777 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.362043 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.362539 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kkl\" (UniqueName: \"kubernetes.io/projected/15c7d35a-0f80-4823-8d8d-371e1f76f869-kube-api-access-n9kkl\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.464969 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kkl\" (UniqueName: \"kubernetes.io/projected/15c7d35a-0f80-4823-8d8d-371e1f76f869-kube-api-access-n9kkl\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.465039 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.465072 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.465120 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.465808 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.469920 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.470584 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.472174 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName:
\"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.472897 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c7d35a-0f80-4823-8d8d-371e1f76f869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.481234 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kkl\" (UniqueName: \"kubernetes.io/projected/15c7d35a-0f80-4823-8d8d-371e1f76f869-kube-api-access-n9kkl\") pod \"nova-cell1-novncproxy-0\" (UID: \"15c7d35a-0f80-4823-8d8d-371e1f76f869\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:51 crc kubenswrapper[5031]: I0129 09:01:51.544995 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.001428 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.160155 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15c7d35a-0f80-4823-8d8d-371e1f76f869","Type":"ContainerStarted","Data":"1932734a61018aada743eda047a6669446ad670780f9eeb2802b5a2ef110d11b"} Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.295238 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f621af-6032-4595-b8d6-35af999c21b5" path="/var/lib/kubelet/pods/35f621af-6032-4595-b8d6-35af999c21b5/volumes" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.644223 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.644745 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.645113 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.645166 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.648529 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.650131 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.835577 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.837977 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.854347 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.997152 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.997620 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.997663 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7frq\" (UniqueName: \"kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.997703 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:52 crc kubenswrapper[5031]: I0129 09:01:52.998023 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.100177 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.100654 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.100798 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.100903 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p7frq\" (UniqueName: \"kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.101014 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.101780 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.101842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.102174 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.102311 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.124021 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7frq\" (UniqueName: \"kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq\") pod \"dnsmasq-dns-68d4b6d797-2g7mg\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.166764 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.170467 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15c7d35a-0f80-4823-8d8d-371e1f76f869","Type":"ContainerStarted","Data":"4532a25d3f0665e713c7095d83b21cdc3382a15db5b040ad7f03151a330a8240"} Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.189701 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.189683224 podStartE2EDuration="2.189683224s" podCreationTimestamp="2026-01-29 09:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:53.188859502 +0000 UTC m=+1393.688447454" watchObservedRunningTime="2026-01-29 09:01:53.189683224 +0000 UTC m=+1393.689271176" Jan 29 09:01:53 crc kubenswrapper[5031]: I0129 09:01:53.671131 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:01:54 crc kubenswrapper[5031]: I0129 09:01:54.182593 5031 generic.go:334] "Generic (PLEG): container finished" podID="326ac964-161b-4a55-9bc5-ba303d325d27" containerID="905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2" exitCode=0 Jan 29 09:01:54 crc kubenswrapper[5031]: I0129 09:01:54.182763 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" event={"ID":"326ac964-161b-4a55-9bc5-ba303d325d27","Type":"ContainerDied","Data":"905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2"} Jan 29 09:01:54 crc kubenswrapper[5031]: I0129 09:01:54.182964 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" event={"ID":"326ac964-161b-4a55-9bc5-ba303d325d27","Type":"ContainerStarted","Data":"a7f0f556c8f121fb9ce2aeaee2af3d66d88f05a18ba855f69337bbce40a1c822"} Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.195060 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" event={"ID":"326ac964-161b-4a55-9bc5-ba303d325d27","Type":"ContainerStarted","Data":"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416"} Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.196226 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.220345 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" podStartSLOduration=3.22032163 podStartE2EDuration="3.22032163s" podCreationTimestamp="2026-01-29 09:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:55.213527197 +0000 UTC m=+1395.713115149" watchObservedRunningTime="2026-01-29 09:01:55.22032163 +0000 UTC m=+1395.719909582" Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.461462 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.461853 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-central-agent" containerID="cri-o://8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9" gracePeriod=30 
Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.461935 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="sg-core" containerID="cri-o://66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df" gracePeriod=30 Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.462003 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="proxy-httpd" containerID="cri-o://405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853" gracePeriod=30 Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.462019 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-notification-agent" containerID="cri-o://caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7" gracePeriod=30 Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.609277 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.609879 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-log" containerID="cri-o://ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29" gracePeriod=30 Jan 29 09:01:55 crc kubenswrapper[5031]: I0129 09:01:55.610087 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-api" containerID="cri-o://b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407" gracePeriod=30 Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.208855 5031 generic.go:334] "Generic (PLEG): container finished" podID="8c4a414d-85d4-4586-a252-47b7db649478" containerID="405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853" exitCode=0 Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.208894 5031 generic.go:334] "Generic (PLEG): container finished" podID="8c4a414d-85d4-4586-a252-47b7db649478" containerID="66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df" exitCode=2 Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.208906 5031 generic.go:334] "Generic (PLEG): container finished" podID="8c4a414d-85d4-4586-a252-47b7db649478" containerID="8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9" exitCode=0 Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.208985 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerDied","Data":"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853"} Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.209012 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerDied","Data":"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df"} Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.209025 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerDied","Data":"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"} Jan 29 09:01:56 crc 
kubenswrapper[5031]: I0129 09:01:56.211172 5031 generic.go:334] "Generic (PLEG): container finished" podID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerID="ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29" exitCode=143 Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.211333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerDied","Data":"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29"} Jan 29 09:01:56 crc kubenswrapper[5031]: I0129 09:01:56.545847 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.230239 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.240513 5031 generic.go:334] "Generic (PLEG): container finished" podID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerID="b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407" exitCode=0 Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.240594 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.240609 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerDied","Data":"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407"} Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.240754 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80bde5a5-398e-447e-b8a4-91654bfe6841","Type":"ContainerDied","Data":"d3be9e60c5ab17513b8ee474fab8c05c6ca883f8c24241c5c37b1c4dc5f58d74"} Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.240788 5031 scope.go:117] "RemoveContainer" containerID="b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.286334 5031 scope.go:117] "RemoveContainer" containerID="ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.322931 5031 scope.go:117] "RemoveContainer" containerID="b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407" Jan 29 09:01:59 crc kubenswrapper[5031]: E0129 09:01:59.323482 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407\": container with ID starting with b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407 not found: ID does not exist" containerID="b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.323524 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407"} err="failed to get container status \"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407\": rpc error: code = NotFound desc = could not find container \"b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407\": container with ID starting with b91f4a8cfe4ee616c9ed4f31a70e32d5c4473b67eb44489cd8b8e9956cf49407 not found: ID does not exist" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.323551 5031 
scope.go:117] "RemoveContainer" containerID="ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29" Jan 29 09:01:59 crc kubenswrapper[5031]: E0129 09:01:59.324022 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29\": container with ID starting with ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29 not found: ID does not exist" containerID="ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.324070 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29"} err="failed to get container status \"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29\": rpc error: code = NotFound desc = could not find container \"ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29\": container with ID starting with ec4f44e413fc8491b0df4591fe3c909b3a8875e3e2db58ff35bbfc504a1c2f29 not found: ID does not exist" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.343826 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlrg\" (UniqueName: \"kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg\") pod \"80bde5a5-398e-447e-b8a4-91654bfe6841\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.344085 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle\") pod \"80bde5a5-398e-447e-b8a4-91654bfe6841\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.344171 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data\") pod \"80bde5a5-398e-447e-b8a4-91654bfe6841\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.344276 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs\") pod \"80bde5a5-398e-447e-b8a4-91654bfe6841\" (UID: \"80bde5a5-398e-447e-b8a4-91654bfe6841\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.344998 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs" (OuterVolumeSpecName: "logs") pod "80bde5a5-398e-447e-b8a4-91654bfe6841" (UID: "80bde5a5-398e-447e-b8a4-91654bfe6841"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.353669 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg" (OuterVolumeSpecName: "kube-api-access-rtlrg") pod "80bde5a5-398e-447e-b8a4-91654bfe6841" (UID: "80bde5a5-398e-447e-b8a4-91654bfe6841"). InnerVolumeSpecName "kube-api-access-rtlrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.383745 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80bde5a5-398e-447e-b8a4-91654bfe6841" (UID: "80bde5a5-398e-447e-b8a4-91654bfe6841"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.386207 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data" (OuterVolumeSpecName: "config-data") pod "80bde5a5-398e-447e-b8a4-91654bfe6841" (UID: "80bde5a5-398e-447e-b8a4-91654bfe6841"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.446873 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bde5a5-398e-447e-b8a4-91654bfe6841-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.446926 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlrg\" (UniqueName: \"kubernetes.io/projected/80bde5a5-398e-447e-b8a4-91654bfe6841-kube-api-access-rtlrg\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.446949 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.446965 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bde5a5-398e-447e-b8a4-91654bfe6841-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.591483 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.603993 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.626748 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:59 crc kubenswrapper[5031]: E0129 09:01:59.628893 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-log" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.628917 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-log" Jan 29 09:01:59 crc kubenswrapper[5031]: E0129 09:01:59.628939 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-api" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.628946 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-api" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.629177 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-log" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.629200 5031 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" containerName="nova-api-api" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.638698 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.638824 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.647659 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.647792 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.651638 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.746359 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751254 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751401 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751504 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751539 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751584 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.751624 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhxh\" (UniqueName: \"kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853156 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd\") pod 
\"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853425 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853449 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853530 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853559 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4qjg\" (UniqueName: \"kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853627 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853645 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853667 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts\") pod \"8c4a414d-85d4-4586-a252-47b7db649478\" (UID: \"8c4a414d-85d4-4586-a252-47b7db649478\") " Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853752 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853925 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853955 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.853985 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.854014 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhxh\" (UniqueName: \"kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.854047 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.854114 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.854171 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.854704 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.858360 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg" (OuterVolumeSpecName: "kube-api-access-c4qjg") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "kube-api-access-c4qjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.860750 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.864133 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.864389 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.865946 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.867420 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts" (OuterVolumeSpecName: "scripts") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.872352 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.879273 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbhxh\" (UniqueName: \"kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh\") pod \"nova-api-0\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") " pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.892684 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.932068 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.957232 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c4a414d-85d4-4586-a252-47b7db649478-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.957272 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.957287 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4qjg\" (UniqueName: \"kubernetes.io/projected/8c4a414d-85d4-4586-a252-47b7db649478-kube-api-access-c4qjg\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.957299 5031 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.957309 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.961619 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.977176 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:59 crc kubenswrapper[5031]: I0129 09:01:59.996141 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data" (OuterVolumeSpecName: "config-data") pod "8c4a414d-85d4-4586-a252-47b7db649478" (UID: "8c4a414d-85d4-4586-a252-47b7db649478"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.059635 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.059686 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c4a414d-85d4-4586-a252-47b7db649478-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.269276 5031 generic.go:334] "Generic (PLEG): container finished" podID="8c4a414d-85d4-4586-a252-47b7db649478" containerID="caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7" exitCode=0 Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.269387 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerDied","Data":"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"} Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.269699 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c4a414d-85d4-4586-a252-47b7db649478","Type":"ContainerDied","Data":"ff6df5cd21d1a2769f9b095ac099e5fbe0dbb48c79ff022df7370035ebf974bd"} Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.269728 5031 scope.go:117] "RemoveContainer" containerID="405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.269414 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.304966 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bde5a5-398e-447e-b8a4-91654bfe6841" path="/var/lib/kubelet/pods/80bde5a5-398e-447e-b8a4-91654bfe6841/volumes" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.321356 5031 scope.go:117] "RemoveContainer" containerID="66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.330947 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.348312 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.365219 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.370641 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-notification-agent" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.370684 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-notification-agent" Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.370746 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="proxy-httpd" Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.370756 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="proxy-httpd" Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.370790 5031 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="sg-core"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.370799 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="sg-core"
Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.370817 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-central-agent"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.370827 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-central-agent"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.376828 5031 scope.go:117] "RemoveContainer" containerID="caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.377032 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="sg-core"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.377085 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-central-agent"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.377096 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="ceilometer-notification-agent"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.377132 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4a414d-85d4-4586-a252-47b7db649478" containerName="proxy-httpd"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.391123 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.402887 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.403033 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.403203 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.419357 5031 scope.go:117] "RemoveContainer" containerID="8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.431924 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.443971 5031 scope.go:117] "RemoveContainer" containerID="405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853"
Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.444455 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853\": container with ID starting with 405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853 not found: ID does not exist" containerID="405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.444518 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853"} err="failed to get container status \"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853\": rpc error: code = NotFound desc = could not find container \"405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853\": container with ID starting with 405933261b91fb231d9ff746ed150588080226d747f5e01023fff3b4694be853 not found: ID does not exist"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.444551 5031 scope.go:117] "RemoveContainer" containerID="66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df"
Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.444913 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df\": container with ID starting with 66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df not found: ID does not exist" containerID="66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.444950 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df"} err="failed to get container status \"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df\": rpc error: code = NotFound desc = could not find container \"66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df\": container with ID starting with 66a40828e50597d0e1c9f36a1eebf976c2d763d791c98390ea283c2cc21739df not found: ID does not exist"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.444972 5031 scope.go:117] "RemoveContainer" containerID="caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"
Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.445195 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7\": container with ID starting with caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7 not found: ID does not exist" containerID="caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.445219 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7"} err="failed to get container status \"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7\": rpc error: code = NotFound desc = could not find container \"caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7\": container with ID starting with caa6286c7ab0bd2a4ce561e5fcf908a57bfe75ad07935a23c4e4bcc44a70c3b7 not found: ID does not exist"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.445232 5031 scope.go:117] "RemoveContainer" containerID="8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"
Jan 29 09:02:00 crc kubenswrapper[5031]: E0129 09:02:00.446409 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9\": container with ID starting with 8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9 not found: ID does not exist" containerID="8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.446451 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9"} err="failed to get container status \"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9\": rpc error: code = NotFound desc = could not find container \"8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9\": container with ID starting with 8ec9c68940092628799e5a09291fad8d92b8b0c83f1da4cf6ddaa4ff62f4cdd9 not found: ID does not exist"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466419 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466533 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466662 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466698 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466774 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrtg\" (UniqueName: \"kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466827 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.466855 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.477417 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.568668 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfrtg\" (UniqueName: \"kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.568766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.568837 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.568872 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.569182 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.569470 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.569516 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.569719 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.570449 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.571102 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.585411 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.585466 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.585486 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.586026 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.588939 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfrtg\" (UniqueName: \"kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.588976 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data\") pod \"ceilometer-0\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") " pod="openstack/ceilometer-0"
Jan 29 09:02:00 crc kubenswrapper[5031]: I0129 09:02:00.723241 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.234644 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:02:01 crc kubenswrapper[5031]: W0129 09:02:01.240589 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96cf2a84_0927_4208_8959_96682bf54375.slice/crio-6b4c58cef4103e449cee55c7430a2420c269f5870d31303721b29535605277ed WatchSource:0}: Error finding container 6b4c58cef4103e449cee55c7430a2420c269f5870d31303721b29535605277ed: Status 404 returned error can't find the container with id 6b4c58cef4103e449cee55c7430a2420c269f5870d31303721b29535605277ed
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.244643 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.282187 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerStarted","Data":"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db"}
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.282624 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerStarted","Data":"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175"}
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.282640 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerStarted","Data":"97dd260cbe1a433599c587761f2d9854f0532c574db6701a8524472148eb870e"}
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.286626 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerStarted","Data":"6b4c58cef4103e449cee55c7430a2420c269f5870d31303721b29535605277ed"}
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.319883 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.319861045 podStartE2EDuration="2.319861045s" podCreationTimestamp="2026-01-29 09:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:01.308485707 +0000 UTC m=+1401.808073669" watchObservedRunningTime="2026-01-29 09:02:01.319861045 +0000 UTC m=+1401.819448997"
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.546049 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:02:01 crc kubenswrapper[5031]: I0129 09:02:01.576440 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.298180 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4a414d-85d4-4586-a252-47b7db649478" path="/var/lib/kubelet/pods/8c4a414d-85d4-4586-a252-47b7db649478/volumes"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.299873 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerStarted","Data":"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"}
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.316608 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.607033 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tvcms"]
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.608179 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.611064 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.612173 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.630439 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tvcms"]
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.732711 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rhl\" (UniqueName: \"kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.733131 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.733171 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.733199 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.835459 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.835573 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.835625 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.835700 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rhl\" (UniqueName: \"kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.840704 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.841293 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.842446 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:02 crc kubenswrapper[5031]: I0129 09:02:02.858950 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rhl\" (UniqueName: \"kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl\") pod \"nova-cell1-cell-mapping-tvcms\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") " pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.048116 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.168626 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg"
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.251076 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"]
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.251395 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="dnsmasq-dns" containerID="cri-o://018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004" gracePeriod=10
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.312821 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerStarted","Data":"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"}
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.600039 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tvcms"]
Jan 29 09:02:03 crc kubenswrapper[5031]: I0129 09:02:03.905232 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.060791 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb\") pod \"86f4cc28-e60d-4c01-811a-b4a200372cfa\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") "
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.061131 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc\") pod \"86f4cc28-e60d-4c01-811a-b4a200372cfa\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") "
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.061161 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb\") pod \"86f4cc28-e60d-4c01-811a-b4a200372cfa\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") "
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.061240 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config\") pod \"86f4cc28-e60d-4c01-811a-b4a200372cfa\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") "
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.061280 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmplp\" (UniqueName: \"kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp\") pod \"86f4cc28-e60d-4c01-811a-b4a200372cfa\" (UID: \"86f4cc28-e60d-4c01-811a-b4a200372cfa\") "
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.067530 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp" (OuterVolumeSpecName: "kube-api-access-tmplp") pod "86f4cc28-e60d-4c01-811a-b4a200372cfa" (UID: "86f4cc28-e60d-4c01-811a-b4a200372cfa"). InnerVolumeSpecName "kube-api-access-tmplp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.113561 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86f4cc28-e60d-4c01-811a-b4a200372cfa" (UID: "86f4cc28-e60d-4c01-811a-b4a200372cfa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.118460 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86f4cc28-e60d-4c01-811a-b4a200372cfa" (UID: "86f4cc28-e60d-4c01-811a-b4a200372cfa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.121350 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86f4cc28-e60d-4c01-811a-b4a200372cfa" (UID: "86f4cc28-e60d-4c01-811a-b4a200372cfa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.129937 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config" (OuterVolumeSpecName: "config") pod "86f4cc28-e60d-4c01-811a-b4a200372cfa" (UID: "86f4cc28-e60d-4c01-811a-b4a200372cfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.163429 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.163465 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.163475 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.163485 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmplp\" (UniqueName: \"kubernetes.io/projected/86f4cc28-e60d-4c01-811a-b4a200372cfa-kube-api-access-tmplp\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.163495 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f4cc28-e60d-4c01-811a-b4a200372cfa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.329301 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerStarted","Data":"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"}
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.332707 5031 generic.go:334] "Generic (PLEG): container finished" podID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerID="018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004" exitCode=0
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.332794 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerDied","Data":"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"}
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.332835 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8" event={"ID":"86f4cc28-e60d-4c01-811a-b4a200372cfa","Type":"ContainerDied","Data":"ea2c9caccdcdc3712b7f27808123072411e6ec1ec53972292291b389bbe80d4b"}
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.332860 5031 scope.go:117] "RemoveContainer" containerID="018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.333079 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-p92v8"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.354433 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tvcms" event={"ID":"7b2e0d86-555d-42e1-beca-00cd83b2c90a","Type":"ContainerStarted","Data":"a35c5ca26395119ecd0d07528d5193eabbfe462f97210e6471ebf1fdacc31273"}
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.354491 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tvcms" event={"ID":"7b2e0d86-555d-42e1-beca-00cd83b2c90a","Type":"ContainerStarted","Data":"e2f4c4b13f3e0e4e248dec9c3117eb85fc76a3eeb6afb4d9495fe1e8ab83e3ad"}
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.378448 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"]
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.393255 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-p92v8"]
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.396021 5031 scope.go:117] "RemoveContainer" containerID="5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.396344 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tvcms" podStartSLOduration=2.396321854 podStartE2EDuration="2.396321854s" podCreationTimestamp="2026-01-29 09:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:04.380499886 +0000 UTC m=+1404.880087838" watchObservedRunningTime="2026-01-29 09:02:04.396321854 +0000 UTC m=+1404.895909806"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.425063 5031 scope.go:117] "RemoveContainer" containerID="018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"
Jan 29 09:02:04 crc kubenswrapper[5031]: E0129 09:02:04.425948 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004\": container with ID starting with 018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004 not found: ID does not exist" containerID="018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.425986 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004"} err="failed to get container status \"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004\": rpc error: code = NotFound desc = could not find container \"018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004\": container with ID starting with 018c34018cb9b1a94a99303c566168d13077dec9f11dec373c7aeb824e9d7004 not found: ID does not exist"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.426009 5031 scope.go:117] "RemoveContainer" containerID="5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"
Jan 29 09:02:04 crc kubenswrapper[5031]: E0129 09:02:04.426493 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567\": container with ID starting with 5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567 not found: ID does not exist" containerID="5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"
Jan 29 09:02:04 crc kubenswrapper[5031]: I0129 09:02:04.426588 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567"} err="failed to get container status \"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567\": rpc error: code = NotFound desc = could not find container \"5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567\": container with ID starting with 5df2d337847dbc6abd77bdb5082d8c4149c3a7bb7a8d0363d5a7c690e0720567 not found: ID does not exist"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.270354 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"]
Jan 29 09:02:06 crc kubenswrapper[5031]: E0129 09:02:06.272553 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="dnsmasq-dns"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.272704 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="dnsmasq-dns"
Jan 29 09:02:06 crc kubenswrapper[5031]: E0129 09:02:06.272842 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="init"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.272925 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="init"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.273237 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" containerName="dnsmasq-dns"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.277098 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.344593 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f4cc28-e60d-4c01-811a-b4a200372cfa" path="/var/lib/kubelet/pods/86f4cc28-e60d-4c01-811a-b4a200372cfa/volumes"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.347188 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"]
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.389703 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerStarted","Data":"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"}
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.411096 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.411182 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.411265 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62k8\" (UniqueName: \"kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.425620 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.318777798 podStartE2EDuration="6.425551451s" podCreationTimestamp="2026-01-29 09:02:00 +0000 UTC" firstStartedPulling="2026-01-29 09:02:01.244093325 +0000 UTC m=+1401.743681287" lastFinishedPulling="2026-01-29 09:02:05.350866988 +0000 UTC m=+1405.850454940" observedRunningTime="2026-01-29 09:02:06.411921253 +0000 UTC m=+1406.911509225" watchObservedRunningTime="2026-01-29 09:02:06.425551451 +0000 UTC m=+1406.925139413"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.512936 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.513432 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.513493 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62k8\" (UniqueName: \"kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.513357 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.514287 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.538416 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62k8\" (UniqueName: \"kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8\") pod \"redhat-operators-hv66j\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:06 crc kubenswrapper[5031]: I0129 09:02:06.610267 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:07 crc kubenswrapper[5031]: I0129 09:02:07.111678 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"]
Jan 29 09:02:07 crc kubenswrapper[5031]: W0129 09:02:07.131308 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210be178_55f1_4dd1_9fdd_cd13745289e2.slice/crio-194002355c90d85c57ef5299767b84729c5912cee9c20c094be9bb3252905d16 WatchSource:0}: Error finding container 194002355c90d85c57ef5299767b84729c5912cee9c20c094be9bb3252905d16: Status 404 returned error can't find the container with id 194002355c90d85c57ef5299767b84729c5912cee9c20c094be9bb3252905d16
Jan 29 09:02:07 crc kubenswrapper[5031]: I0129 09:02:07.400259 5031 generic.go:334] "Generic (PLEG): container finished" podID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerID="7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b" exitCode=0
Jan 29 09:02:07 crc kubenswrapper[5031]: I0129 09:02:07.400300 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerDied","Data":"7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b"}
Jan 29 09:02:07 crc kubenswrapper[5031]: I0129 09:02:07.400341 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerStarted","Data":"194002355c90d85c57ef5299767b84729c5912cee9c20c094be9bb3252905d16"}
Jan 29 09:02:07 crc kubenswrapper[5031]: I0129 09:02:07.400709 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 09:02:08 crc kubenswrapper[5031]: I0129 09:02:08.413623 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerStarted","Data":"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d"}
Jan 29 09:02:09 crc kubenswrapper[5031]: I0129 09:02:09.432506 5031 generic.go:334] "Generic (PLEG): container finished" podID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerID="2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d" exitCode=0
Jan 29 09:02:09 crc kubenswrapper[5031]: I0129 09:02:09.432626 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerDied","Data":"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d"}
Jan 29 09:02:09 crc kubenswrapper[5031]: I0129 09:02:09.978158 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:02:09 crc kubenswrapper[5031]: I0129 09:02:09.978559 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:02:10 crc kubenswrapper[5031]: I0129 09:02:10.444501 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerStarted","Data":"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11"}
Jan 29 09:02:10 crc kubenswrapper[5031]: E0129 09:02:10.887389 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b2e0d86_555d_42e1_beca_00cd83b2c90a.slice/crio-a35c5ca26395119ecd0d07528d5193eabbfe462f97210e6471ebf1fdacc31273.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 09:02:10 crc kubenswrapper[5031]: I0129 09:02:10.989678 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:02:10 crc kubenswrapper[5031]: I0129 09:02:10.990231 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:02:11 crc kubenswrapper[5031]: I0129 09:02:11.459012 5031 generic.go:334] "Generic (PLEG): container finished" podID="7b2e0d86-555d-42e1-beca-00cd83b2c90a" containerID="a35c5ca26395119ecd0d07528d5193eabbfe462f97210e6471ebf1fdacc31273" exitCode=0
Jan 29 09:02:11 crc kubenswrapper[5031]: I0129 09:02:11.459149 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tvcms" event={"ID":"7b2e0d86-555d-42e1-beca-00cd83b2c90a","Type":"ContainerDied","Data":"a35c5ca26395119ecd0d07528d5193eabbfe462f97210e6471ebf1fdacc31273"}
Jan 29 09:02:11 crc kubenswrapper[5031]: I0129 09:02:11.491792 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hv66j" podStartSLOduration=2.8837167040000002 podStartE2EDuration="5.491768711s" podCreationTimestamp="2026-01-29 09:02:06 +0000 UTC" firstStartedPulling="2026-01-29 09:02:07.403724075 +0000 UTC m=+1407.903312037" lastFinishedPulling="2026-01-29 09:02:10.011776102 +0000 UTC m=+1410.511364044" observedRunningTime="2026-01-29 09:02:11.485598603 +0000 UTC m=+1411.985186565" watchObservedRunningTime="2026-01-29 09:02:11.491768711 +0000 UTC m=+1411.991356663"
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.827521 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.954919 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle\") pod \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") "
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.955048 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5rhl\" (UniqueName: \"kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl\") pod \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") "
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.955072 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data\") pod \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") "
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.955137 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts\") pod \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\" (UID: \"7b2e0d86-555d-42e1-beca-00cd83b2c90a\") "
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.962391 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl" (OuterVolumeSpecName: "kube-api-access-c5rhl") pod "7b2e0d86-555d-42e1-beca-00cd83b2c90a" (UID: "7b2e0d86-555d-42e1-beca-00cd83b2c90a"). InnerVolumeSpecName "kube-api-access-c5rhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.974554 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts" (OuterVolumeSpecName: "scripts") pod "7b2e0d86-555d-42e1-beca-00cd83b2c90a" (UID: "7b2e0d86-555d-42e1-beca-00cd83b2c90a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.983059 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data" (OuterVolumeSpecName: "config-data") pod "7b2e0d86-555d-42e1-beca-00cd83b2c90a" (UID: "7b2e0d86-555d-42e1-beca-00cd83b2c90a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:12 crc kubenswrapper[5031]: I0129 09:02:12.991457 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b2e0d86-555d-42e1-beca-00cd83b2c90a" (UID: "7b2e0d86-555d-42e1-beca-00cd83b2c90a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.059111 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.059200 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.059217 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5rhl\" (UniqueName: \"kubernetes.io/projected/7b2e0d86-555d-42e1-beca-00cd83b2c90a-kube-api-access-c5rhl\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.059228 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2e0d86-555d-42e1-beca-00cd83b2c90a-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.481804 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tvcms" event={"ID":"7b2e0d86-555d-42e1-beca-00cd83b2c90a","Type":"ContainerDied","Data":"e2f4c4b13f3e0e4e248dec9c3117eb85fc76a3eeb6afb4d9495fe1e8ab83e3ad"}
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.481855 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f4c4b13f3e0e4e248dec9c3117eb85fc76a3eeb6afb4d9495fe1e8ab83e3ad"
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.481943 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tvcms"
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.709268 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.709862 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-log" containerID="cri-o://b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.709958 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-api" containerID="cri-o://69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.801950 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.802236 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e89e2668-4736-4be0-b913-4dbf458784e3" containerName="nova-scheduler-scheduler" containerID="cri-o://3645e7083c72b0aae5df0626f56b865340ea17eba3edde86617311dee41f0ee8" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.823742 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.824022 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-log" containerID="cri-o://7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[5031]: I0129 09:02:13.824714 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-metadata" containerID="cri-o://c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5" gracePeriod=30
Jan 29 09:02:14 crc kubenswrapper[5031]: I0129 09:02:14.498924 5031 generic.go:334] "Generic (PLEG): container finished" podID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerID="b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175" exitCode=143
Jan 29 09:02:14 crc kubenswrapper[5031]: I0129 09:02:14.499041 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerDied","Data":"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175"}
Jan 29 09:02:14 crc kubenswrapper[5031]: I0129 09:02:14.502634 5031 generic.go:334] "Generic (PLEG): container finished" podID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerID="7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45" exitCode=143
Jan 29 09:02:14 crc kubenswrapper[5031]: I0129 09:02:14.502686 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerDied","Data":"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45"}
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.524816 5031 generic.go:334] "Generic (PLEG): container finished" podID="e89e2668-4736-4be0-b913-4dbf458784e3" containerID="3645e7083c72b0aae5df0626f56b865340ea17eba3edde86617311dee41f0ee8" exitCode=0
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.525020 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e89e2668-4736-4be0-b913-4dbf458784e3","Type":"ContainerDied","Data":"3645e7083c72b0aae5df0626f56b865340ea17eba3edde86617311dee41f0ee8"}
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.610682 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.610766 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.661479 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hv66j"
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.693611 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.839234 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle\") pod \"e89e2668-4736-4be0-b913-4dbf458784e3\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") "
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.839462 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sdqr\" (UniqueName: \"kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr\") pod \"e89e2668-4736-4be0-b913-4dbf458784e3\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") "
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.839515 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data\") pod \"e89e2668-4736-4be0-b913-4dbf458784e3\" (UID: \"e89e2668-4736-4be0-b913-4dbf458784e3\") "
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.860783 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr" (OuterVolumeSpecName: "kube-api-access-2sdqr") pod "e89e2668-4736-4be0-b913-4dbf458784e3" (UID: "e89e2668-4736-4be0-b913-4dbf458784e3"). InnerVolumeSpecName "kube-api-access-2sdqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.906195 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89e2668-4736-4be0-b913-4dbf458784e3" (UID: "e89e2668-4736-4be0-b913-4dbf458784e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.906268 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data" (OuterVolumeSpecName: "config-data") pod "e89e2668-4736-4be0-b913-4dbf458784e3" (UID: "e89e2668-4736-4be0-b913-4dbf458784e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.942124 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.942237 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sdqr\" (UniqueName: \"kubernetes.io/projected/e89e2668-4736-4be0-b913-4dbf458784e3-kube-api-access-2sdqr\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:16 crc kubenswrapper[5031]: I0129 09:02:16.942275 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89e2668-4736-4be0-b913-4dbf458784e3-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.291300 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351328 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351527 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbhxh\" (UniqueName: \"kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351571 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351600 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351671 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.351855 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data\") pod \"9b67e1ca-9d19-4489-a44d-03e70de4854a\" (UID: \"9b67e1ca-9d19-4489-a44d-03e70de4854a\") "
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.356557 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs" (OuterVolumeSpecName: "logs") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.361441 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh" (OuterVolumeSpecName: "kube-api-access-nbhxh") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "kube-api-access-nbhxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.392526 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.394588 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data" (OuterVolumeSpecName: "config-data") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.420811 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.432000 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b67e1ca-9d19-4489-a44d-03e70de4854a" (UID: "9b67e1ca-9d19-4489-a44d-03e70de4854a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457422 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457462 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457486 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbhxh\" (UniqueName: \"kubernetes.io/projected/9b67e1ca-9d19-4489-a44d-03e70de4854a-kube-api-access-nbhxh\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457500 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b67e1ca-9d19-4489-a44d-03e70de4854a-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457511 5031 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.457523 5031 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b67e1ca-9d19-4489-a44d-03e70de4854a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.532835 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.543873 5031 generic.go:334] "Generic (PLEG): container finished" podID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerID="c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5" exitCode=0
Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.543935 5031 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.543952 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerDied","Data":"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5"} Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.544419 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbad90a9-72c9-4b29-9169-3650a4769ffb","Type":"ContainerDied","Data":"206638579a91195d68db4524e1e92af8c3193e48e7b27651b6b9c7f0e62eebf8"} Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.544436 5031 scope.go:117] "RemoveContainer" containerID="c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.547315 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.547329 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e89e2668-4736-4be0-b913-4dbf458784e3","Type":"ContainerDied","Data":"076782b14a7a4244574f712fcfdec3d4b8eee0aa610aab3802570b43840e12db"} Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.552216 5031 generic.go:334] "Generic (PLEG): container finished" podID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerID="69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db" exitCode=0 Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.552640 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.552719 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerDied","Data":"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db"} Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.552782 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b67e1ca-9d19-4489-a44d-03e70de4854a","Type":"ContainerDied","Data":"97dd260cbe1a433599c587761f2d9854f0532c574db6701a8524472148eb870e"} Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.610549 5031 scope.go:117] "RemoveContainer" containerID="7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.610732 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.632381 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.646931 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.656213 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.657253 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-log" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657280 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-log" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 
09:02:17.657310 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2e0d86-555d-42e1-beca-00cd83b2c90a" containerName="nova-manage" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657318 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2e0d86-555d-42e1-beca-00cd83b2c90a" containerName="nova-manage" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.657328 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-api" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657336 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-api" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.657357 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-log" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657663 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-log" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.657678 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-metadata" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657684 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-metadata" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.657691 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89e2668-4736-4be0-b913-4dbf458784e3" containerName="nova-scheduler-scheduler" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657699 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89e2668-4736-4be0-b913-4dbf458784e3" containerName="nova-scheduler-scheduler" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657921 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89e2668-4736-4be0-b913-4dbf458784e3" containerName="nova-scheduler-scheduler" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657938 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-log" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657956 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" containerName="nova-api-api" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657966 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2e0d86-555d-42e1-beca-00cd83b2c90a" containerName="nova-manage" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657974 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-log" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.657981 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" containerName="nova-metadata-metadata" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.659100 5031 util.go:30] "No sandbox for pod can be found. 
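
[annotation] The paired cpu_manager/state_mem and memory_manager lines above are the kubelet's resource managers discarding per-container accounting for the pods that were just deleted; despite the E-level "RemoveStaleState: removing container" entries, this is routine housekeeping before the replacement pods are admitted, not a failure. Conceptually the sweep is a map purge keyed by pod UID and container name; an illustrative sketch with simplified stand-in types, not the kubelet's own:

    package main

    import "fmt"

    // key identifies per-container state the way the resource managers do:
    // by pod UID plus container name.
    type key struct{ podUID, container string }

    // removeStaleState drops entries whose pod no longer exists on the node.
    func removeStaleState(state map[key]string, livePods map[string]bool) {
        for k := range state {
            if !livePods[k.podUID] {
                fmt.Printf("removing stale state for %s/%s\n", k.podUID, k.container)
                delete(state, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {podUID: "e89e2668-4736-4be0-b913-4dbf458784e3", container: "nova-scheduler-scheduler"}: "cpuset",
        }
        removeStaleState(state, map[string]bool{}) // pod deleted: entry is swept
    }
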
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.677899 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.678093 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.678236 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.679609 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data\") pod \"bbad90a9-72c9-4b29-9169-3650a4769ffb\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.679666 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle\") pod \"bbad90a9-72c9-4b29-9169-3650a4769ffb\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.679687 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs\") pod \"bbad90a9-72c9-4b29-9169-3650a4769ffb\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.679704 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7krt6\" (UniqueName: \"kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6\") pod \"bbad90a9-72c9-4b29-9169-3650a4769ffb\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.679872 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs\") pod \"bbad90a9-72c9-4b29-9169-3650a4769ffb\" (UID: \"bbad90a9-72c9-4b29-9169-3650a4769ffb\") " Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.684190 5031 scope.go:117] "RemoveContainer" containerID="c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.684893 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs" (OuterVolumeSpecName: "logs") pod "bbad90a9-72c9-4b29-9169-3650a4769ffb" (UID: "bbad90a9-72c9-4b29-9169-3650a4769ffb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.684959 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.686585 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5\": container with ID starting with c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5 not found: ID does not exist" containerID="c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.686639 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5"} err="failed to get container status \"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5\": rpc error: code = NotFound desc = could not find container \"c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5\": container with ID starting with c737a2f5f6cf904fa6a14d04592060cd4403255c0a306340952c030aa1795ed5 not found: ID does not exist" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.686668 5031 scope.go:117] "RemoveContainer" containerID="7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.689810 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45\": container with ID starting with 7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45 not found: ID does not exist" containerID="7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.689863 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45"} err="failed to get container status \"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45\": rpc error: code = NotFound desc = could not find container \"7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45\": container with ID starting with 7fd92af853017f1b934bfc5c4f9c880d4d44058867ce3be7cc64ba91bf671e45 not found: ID does not exist" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.689911 5031 scope.go:117] "RemoveContainer" containerID="3645e7083c72b0aae5df0626f56b865340ea17eba3edde86617311dee41f0ee8" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.690054 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6" (OuterVolumeSpecName: "kube-api-access-7krt6") pod "bbad90a9-72c9-4b29-9169-3650a4769ffb" (UID: "bbad90a9-72c9-4b29-9169-3650a4769ffb"). InnerVolumeSpecName "kube-api-access-7krt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.694205 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hv66j" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.739584 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbad90a9-72c9-4b29-9169-3650a4769ffb" (UID: "bbad90a9-72c9-4b29-9169-3650a4769ffb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.745498 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data" (OuterVolumeSpecName: "config-data") pod "bbad90a9-72c9-4b29-9169-3650a4769ffb" (UID: "bbad90a9-72c9-4b29-9169-3650a4769ffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.745567 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.759125 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bbad90a9-72c9-4b29-9169-3650a4769ffb" (UID: "bbad90a9-72c9-4b29-9169-3650a4769ffb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.770762 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.773093 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.777781 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782244 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d916f0-bb3a-45de-b176-616bd8a170e4-logs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782345 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782435 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-config-data\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782490 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782542 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnk7f\" (UniqueName: \"kubernetes.io/projected/92d916f0-bb3a-45de-b176-616bd8a170e4-kube-api-access-bnk7f\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782589 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782668 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782688 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782702 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbad90a9-72c9-4b29-9169-3650a4769ffb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782713 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7krt6\" (UniqueName: \"kubernetes.io/projected/bbad90a9-72c9-4b29-9169-3650a4769ffb-kube-api-access-7krt6\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc 
kubenswrapper[5031]: I0129 09:02:17.782725 5031 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbad90a9-72c9-4b29-9169-3650a4769ffb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.782806 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.821084 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.827778 5031 scope.go:117] "RemoveContainer" containerID="69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.858388 5031 scope.go:117] "RemoveContainer" containerID="b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885488 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885615 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-config-data\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885650 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-config-data\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885713 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885759 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.885804 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnk7f\" (UniqueName: \"kubernetes.io/projected/92d916f0-bb3a-45de-b176-616bd8a170e4-kube-api-access-bnk7f\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.886702 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.887762 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kqzp\" (UniqueName: 
\"kubernetes.io/projected/57931a94-e323-4a04-915d-735dc7a09030-kube-api-access-7kqzp\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.887826 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.887932 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d916f0-bb3a-45de-b176-616bd8a170e4-logs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.888582 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d916f0-bb3a-45de-b176-616bd8a170e4-logs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.891511 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.894434 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.894709 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-config-data\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.899748 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d916f0-bb3a-45de-b176-616bd8a170e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.909633 5031 scope.go:117] "RemoveContainer" containerID="69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.912321 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db\": container with ID starting with 69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db not found: ID does not exist" containerID="69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.912404 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db"} err="failed to get container status 
\"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db\": rpc error: code = NotFound desc = could not find container \"69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db\": container with ID starting with 69568ccfb2ac4167c26ffe8bf466418ec1a7e64e7bd9081eda37e685a29594db not found: ID does not exist" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.912435 5031 scope.go:117] "RemoveContainer" containerID="b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175" Jan 29 09:02:17 crc kubenswrapper[5031]: E0129 09:02:17.913044 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175\": container with ID starting with b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175 not found: ID does not exist" containerID="b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.913186 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175"} err="failed to get container status \"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175\": rpc error: code = NotFound desc = could not find container \"b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175\": container with ID starting with b6934842e30efd10691f10cbebcc93dae60da27c55b70c92e9415eccdb269175 not found: ID does not exist" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.914159 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnk7f\" (UniqueName: \"kubernetes.io/projected/92d916f0-bb3a-45de-b176-616bd8a170e4-kube-api-access-bnk7f\") pod \"nova-api-0\" (UID: \"92d916f0-bb3a-45de-b176-616bd8a170e4\") " pod="openstack/nova-api-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.914635 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.949087 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.950954 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.953818 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.953825 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.962572 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.989817 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-config-data\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.989935 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.990037 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kqzp\" (UniqueName: \"kubernetes.io/projected/57931a94-e323-4a04-915d-735dc7a09030-kube-api-access-7kqzp\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.993648 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-config-data\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:17 crc kubenswrapper[5031]: I0129 09:02:17.993783 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57931a94-e323-4a04-915d-735dc7a09030-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.010571 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kqzp\" (UniqueName: \"kubernetes.io/projected/57931a94-e323-4a04-915d-735dc7a09030-kube-api-access-7kqzp\") pod \"nova-scheduler-0\" (UID: \"57931a94-e323-4a04-915d-735dc7a09030\") " pod="openstack/nova-scheduler-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.091416 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.091536 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc 
kubenswrapper[5031]: I0129 09:02:18.091690 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vrs\" (UniqueName: \"kubernetes.io/projected/97911fdf-2136-4700-8474-d165d6de4c33-kube-api-access-c5vrs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.091763 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97911fdf-2136-4700-8474-d165d6de4c33-logs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.091809 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-config-data\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.110617 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.119394 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.193525 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.193885 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.193958 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5vrs\" (UniqueName: \"kubernetes.io/projected/97911fdf-2136-4700-8474-d165d6de4c33-kube-api-access-c5vrs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.194018 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97911fdf-2136-4700-8474-d165d6de4c33-logs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.194040 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-config-data\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.194944 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97911fdf-2136-4700-8474-d165d6de4c33-logs\") pod \"nova-metadata-0\" (UID: 
\"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.200731 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.201149 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-config-data\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.204176 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97911fdf-2136-4700-8474-d165d6de4c33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.213397 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5vrs\" (UniqueName: \"kubernetes.io/projected/97911fdf-2136-4700-8474-d165d6de4c33-kube-api-access-c5vrs\") pod \"nova-metadata-0\" (UID: \"97911fdf-2136-4700-8474-d165d6de4c33\") " pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.279569 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.314269 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b67e1ca-9d19-4489-a44d-03e70de4854a" path="/var/lib/kubelet/pods/9b67e1ca-9d19-4489-a44d-03e70de4854a/volumes" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.315151 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbad90a9-72c9-4b29-9169-3650a4769ffb" path="/var/lib/kubelet/pods/bbad90a9-72c9-4b29-9169-3650a4769ffb/volumes" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.315819 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89e2668-4736-4be0-b913-4dbf458784e3" path="/var/lib/kubelet/pods/e89e2668-4736-4be0-b913-4dbf458784e3/volumes" Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.625734 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:02:18 crc kubenswrapper[5031]: W0129 09:02:18.859301 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57931a94_e323_4a04_915d_735dc7a09030.slice/crio-287a2ac239b7db2aaebcf6ee3893e321302a62cb35c287d69227e5c4f3019b7f WatchSource:0}: Error finding container 287a2ac239b7db2aaebcf6ee3893e321302a62cb35c287d69227e5c4f3019b7f: Status 404 returned error can't find the container with id 287a2ac239b7db2aaebcf6ee3893e321302a62cb35c287d69227e5c4f3019b7f Jan 29 09:02:18 crc kubenswrapper[5031]: I0129 09:02:18.862078 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.062470 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:02:19 crc kubenswrapper[5031]: W0129 09:02:19.068757 5031 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97911fdf_2136_4700_8474_d165d6de4c33.slice/crio-37b0dc9bef4c4ea2daf71ef602346cb4094594058da4e9b821c6e6ddd08d82d6 WatchSource:0}: Error finding container 37b0dc9bef4c4ea2daf71ef602346cb4094594058da4e9b821c6e6ddd08d82d6: Status 404 returned error can't find the container with id 37b0dc9bef4c4ea2daf71ef602346cb4094594058da4e9b821c6e6ddd08d82d6 Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.609510 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57931a94-e323-4a04-915d-735dc7a09030","Type":"ContainerStarted","Data":"a82bf3d0b14aab5f65d120cf05c018e39a4c39424d7d05d4310e785107cd9a44"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.609917 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57931a94-e323-4a04-915d-735dc7a09030","Type":"ContainerStarted","Data":"287a2ac239b7db2aaebcf6ee3893e321302a62cb35c287d69227e5c4f3019b7f"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.611302 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"92d916f0-bb3a-45de-b176-616bd8a170e4","Type":"ContainerStarted","Data":"e6f381eac1f338a88fed1671115b161896949f61887f2dc2dc937682324d47a7"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.611334 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"92d916f0-bb3a-45de-b176-616bd8a170e4","Type":"ContainerStarted","Data":"0f0e7357bef925f251881ce3ab75fc48bc9dd7c0b48ed51907754862c6e67fe6"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.611347 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"92d916f0-bb3a-45de-b176-616bd8a170e4","Type":"ContainerStarted","Data":"8a2ba32f8f3ece2ecbbe051ae9f19b2faa66cdd175ce5ae23bef76f639ec6f3a"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.614249 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97911fdf-2136-4700-8474-d165d6de4c33","Type":"ContainerStarted","Data":"73f5963791752e08aba192f096242772d55f97032a47875aabf66daff7080d41"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.614280 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97911fdf-2136-4700-8474-d165d6de4c33","Type":"ContainerStarted","Data":"e1cfd763cd242775807c926cc85e513d3a57ffacf264b86a089721772f35279a"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.614295 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97911fdf-2136-4700-8474-d165d6de4c33","Type":"ContainerStarted","Data":"37b0dc9bef4c4ea2daf71ef602346cb4094594058da4e9b821c6e6ddd08d82d6"} Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.614274 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hv66j" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="registry-server" containerID="cri-o://2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11" gracePeriod=2 Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.645073 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.645051216 podStartE2EDuration="2.645051216s" podCreationTimestamp="2026-01-29 09:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:19.630591005 +0000 UTC m=+1420.130178957" watchObservedRunningTime="2026-01-29 09:02:19.645051216 +0000 UTC m=+1420.144639168" Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.657454 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.657427471 podStartE2EDuration="2.657427471s" podCreationTimestamp="2026-01-29 09:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:19.653081044 +0000 UTC m=+1420.152669026" watchObservedRunningTime="2026-01-29 09:02:19.657427471 +0000 UTC m=+1420.157015423" Jan 29 09:02:19 crc kubenswrapper[5031]: I0129 09:02:19.677545 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.677528265 podStartE2EDuration="2.677528265s" podCreationTimestamp="2026-01-29 09:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:19.673109255 +0000 UTC m=+1420.172697207" watchObservedRunningTime="2026-01-29 09:02:19.677528265 +0000 UTC m=+1420.177116217" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.102115 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv66j" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.142864 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities\") pod \"210be178-55f1-4dd1-9fdd-cd13745289e2\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.143345 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content\") pod \"210be178-55f1-4dd1-9fdd-cd13745289e2\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.143445 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62k8\" (UniqueName: \"kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8\") pod \"210be178-55f1-4dd1-9fdd-cd13745289e2\" (UID: \"210be178-55f1-4dd1-9fdd-cd13745289e2\") " Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.143959 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities" (OuterVolumeSpecName: "utilities") pod "210be178-55f1-4dd1-9fdd-cd13745289e2" (UID: "210be178-55f1-4dd1-9fdd-cd13745289e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.152530 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8" (OuterVolumeSpecName: "kube-api-access-n62k8") pod "210be178-55f1-4dd1-9fdd-cd13745289e2" (UID: "210be178-55f1-4dd1-9fdd-cd13745289e2"). InnerVolumeSpecName "kube-api-access-n62k8". 
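
[annotation] The "m=+1420.13..." suffix in the startup-latency timestamps above is Go's monotonic clock reading: time.Time.String() appends it when the value still carries one, and here it counts seconds since the kubelet process started (about 23.7 minutes). A quick illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(10 * time.Millisecond)
        // String() prints the wall clock plus "m=+<seconds>" while the value
        // carries a monotonic reading, exactly as in the kubelet log.
        fmt.Println(time.Now())
        fmt.Println(time.Since(start)) // elapsed time from the monotonic clock
    }
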
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.245705 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.245748 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62k8\" (UniqueName: \"kubernetes.io/projected/210be178-55f1-4dd1-9fdd-cd13745289e2-kube-api-access-n62k8\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.308432 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "210be178-55f1-4dd1-9fdd-cd13745289e2" (UID: "210be178-55f1-4dd1-9fdd-cd13745289e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.347809 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210be178-55f1-4dd1-9fdd-cd13745289e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.626819 5031 generic.go:334] "Generic (PLEG): container finished" podID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerID="2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11" exitCode=0 Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.627716 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv66j" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.632352 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerDied","Data":"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11"} Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.632510 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv66j" event={"ID":"210be178-55f1-4dd1-9fdd-cd13745289e2","Type":"ContainerDied","Data":"194002355c90d85c57ef5299767b84729c5912cee9c20c094be9bb3252905d16"} Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.632545 5031 scope.go:117] "RemoveContainer" containerID="2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.699635 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"] Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.699669 5031 scope.go:117] "RemoveContainer" containerID="2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.714407 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hv66j"] Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.729163 5031 scope.go:117] "RemoveContainer" containerID="7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.770974 5031 scope.go:117] "RemoveContainer" containerID="2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11" Jan 29 09:02:20 crc kubenswrapper[5031]: E0129 09:02:20.771566 5031 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11\": container with ID starting with 2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11 not found: ID does not exist" containerID="2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.771606 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11"} err="failed to get container status \"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11\": rpc error: code = NotFound desc = could not find container \"2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11\": container with ID starting with 2ee08898f17cc35aefaa87bfa7ca50820977b49953eccfafe1f6227f2c58cd11 not found: ID does not exist" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.771630 5031 scope.go:117] "RemoveContainer" containerID="2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d" Jan 29 09:02:20 crc kubenswrapper[5031]: E0129 09:02:20.772129 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d\": container with ID starting with 2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d not found: ID does not exist" containerID="2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.772156 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d"} err="failed to get container status \"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d\": rpc error: code = NotFound desc = could not find container \"2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d\": container with ID starting with 2de7a9aa0269be6d6a7e8f2857a133a20f3be8712a468421b5ec88574deda62d not found: ID does not exist" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.772171 5031 scope.go:117] "RemoveContainer" containerID="7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b" Jan 29 09:02:20 crc kubenswrapper[5031]: E0129 09:02:20.772520 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b\": container with ID starting with 7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b not found: ID does not exist" containerID="7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b" Jan 29 09:02:20 crc kubenswrapper[5031]: I0129 09:02:20.772544 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b"} err="failed to get container status \"7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b\": rpc error: code = NotFound desc = could not find container \"7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b\": container with ID starting with 7a3120e1850a2a36b56928ff12bb4c344ae0073b2b70921fa5f28b2424bd660b not found: ID does not exist" Jan 29 09:02:22 crc kubenswrapper[5031]: I0129 09:02:22.293708 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" path="/var/lib/kubelet/pods/210be178-55f1-4dd1-9fdd-cd13745289e2/volumes" Jan 29 09:02:23 crc kubenswrapper[5031]: I0129 09:02:23.121208 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:02:23 crc kubenswrapper[5031]: I0129 09:02:23.280424 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:02:23 crc kubenswrapper[5031]: I0129 09:02:23.280473 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.111214 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.111774 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.122009 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.149799 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.280884 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.280934 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:02:28 crc kubenswrapper[5031]: I0129 09:02:28.734438 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:02:29 crc kubenswrapper[5031]: I0129 09:02:29.123537 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="92d916f0-bb3a-45de-b176-616bd8a170e4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:02:29 crc kubenswrapper[5031]: I0129 09:02:29.123537 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="92d916f0-bb3a-45de-b176-616bd8a170e4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:02:29 crc kubenswrapper[5031]: I0129 09:02:29.300676 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="97911fdf-2136-4700-8474-d165d6de4c33" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:02:29 crc kubenswrapper[5031]: I0129 09:02:29.300725 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="97911fdf-2136-4700-8474-d165d6de4c33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:02:30 crc kubenswrapper[5031]: I0129 09:02:30.733927 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.118960 5031 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.119579 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.120550 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.120768 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.127274 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.130795 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.296310 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.296358 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.306224 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:02:38 crc kubenswrapper[5031]: I0129 09:02:38.313329 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:02:46 crc kubenswrapper[5031]: I0129 09:02:46.957589 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:02:48 crc kubenswrapper[5031]: I0129 09:02:48.010623 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:02:51 crc kubenswrapper[5031]: I0129 09:02:51.288400 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="rabbitmq" containerID="cri-o://80604cfe1e2c531a86bec2175bc5f49c52d4518f6371c416470cd0abb4d2a830" gracePeriod=604796 Jan 29 09:02:52 crc kubenswrapper[5031]: I0129 09:02:52.320322 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" containerID="cri-o://1e5eb5f612c550d875223b863d54744bd60785ca68ceb3514d702eb8f5ac5363" gracePeriod=604796 Jan 29 09:02:55 crc kubenswrapper[5031]: I0129 09:02:55.411227 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 29 09:02:56 crc kubenswrapper[5031]: I0129 09:02:56.034783 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.020931 5031 generic.go:334] "Generic (PLEG): container finished" podID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerID="80604cfe1e2c531a86bec2175bc5f49c52d4518f6371c416470cd0abb4d2a830" exitCode=0 Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.021031 5031 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerDied","Data":"80604cfe1e2c531a86bec2175bc5f49c52d4518f6371c416470cd0abb4d2a830"} Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.021513 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64621a94-8b58-4593-a9d0-58f0dd3c5e0f","Type":"ContainerDied","Data":"572ee2637e3e4264d635a98edef3a7809ff321b7540668f27dbe885820462cfc"} Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.021533 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572ee2637e3e4264d635a98edef3a7809ff321b7540668f27dbe885820462cfc" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.062067 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149250 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149314 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149392 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149515 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149571 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149634 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149666 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149711 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149750 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmscd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149797 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.149837 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins\") pod \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\" (UID: \"64621a94-8b58-4593-a9d0-58f0dd3c5e0f\") " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.150853 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.153177 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.154150 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.157973 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info" (OuterVolumeSpecName: "pod-info") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.159484 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd" (OuterVolumeSpecName: "kube-api-access-fmscd") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "kube-api-access-fmscd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.160583 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.160598 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.161906 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.189961 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data" (OuterVolumeSpecName: "config-data") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.247179 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf" (OuterVolumeSpecName: "server-conf") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253226 5031 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253274 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmscd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-kube-api-access-fmscd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253288 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253299 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253307 5031 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253315 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253323 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253359 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253386 5031 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.253394 5031 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.299982 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.353600 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "64621a94-8b58-4593-a9d0-58f0dd3c5e0f" (UID: "64621a94-8b58-4593-a9d0-58f0dd3c5e0f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.354813 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64621a94-8b58-4593-a9d0-58f0dd3c5e0f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:58 crc kubenswrapper[5031]: I0129 09:02:58.354840 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.038133 5031 generic.go:334] "Generic (PLEG): container finished" podID="a9e34c17-fba9-4efa-8912-ede69c516560" containerID="1e5eb5f612c550d875223b863d54744bd60785ca68ceb3514d702eb8f5ac5363" exitCode=0 Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.038219 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerDied","Data":"1e5eb5f612c550d875223b863d54744bd60785ca68ceb3514d702eb8f5ac5363"} Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.038276 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.120805 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.137879 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.146664 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171218 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171342 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqkgd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171422 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171453 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171476 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " 
Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171516 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171577 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171672 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171719 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171745 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.171820 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins\") pod \"a9e34c17-fba9-4efa-8912-ede69c516560\" (UID: \"a9e34c17-fba9-4efa-8912-ede69c516560\") " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.172598 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.172774 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.172779 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.172843 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.206016 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207533 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207555 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207573 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="setup-container" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207580 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="setup-container" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207601 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="extract-content" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207609 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="extract-content" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207622 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="setup-container" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207629 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="setup-container" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207644 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207653 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207686 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="extract-utilities" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207695 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="extract-utilities" Jan 29 09:02:59 crc kubenswrapper[5031]: E0129 09:02:59.207713 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="registry-server" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207720 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="registry-server" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207946 5031 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="210be178-55f1-4dd1-9fdd-cd13745289e2" containerName="registry-server" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207966 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.207981 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" containerName="rabbitmq" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.216014 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.242016 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.242612 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.242740 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.242847 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bkbtw" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.243091 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.243120 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.243329 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.244276 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info" (OuterVolumeSpecName: "pod-info") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.244587 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.248051 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.250562 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.254719 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd" (OuterVolumeSpecName: "kube-api-access-nqkgd") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "kube-api-access-nqkgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275636 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275716 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275741 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275766 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84hc\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-kube-api-access-b84hc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275843 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275867 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275886 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-config-data\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275908 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275964 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.275985 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276068 5031 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e34c17-fba9-4efa-8912-ede69c516560-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276080 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276104 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276113 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqkgd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-kube-api-access-nqkgd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276122 5031 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276133 5031 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e34c17-fba9-4efa-8912-ede69c516560-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.276141 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.287213 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data" (OuterVolumeSpecName: "config-data") pod 
"a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.294341 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.329331 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf" (OuterVolumeSpecName: "server-conf") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.331986 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381578 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381637 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381670 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381708 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84hc\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-kube-api-access-b84hc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381758 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381813 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381840 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381865 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381906 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.381927 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.382011 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.382084 5031 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.382100 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e34c17-fba9-4efa-8912-ede69c516560-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.382112 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.383635 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.384225 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.384678 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.385309 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.389322 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.391114 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-config-data\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.391184 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.394758 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.419506 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.419907 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.420136 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84hc\" (UniqueName: \"kubernetes.io/projected/6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73-kube-api-access-b84hc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.438009 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a9e34c17-fba9-4efa-8912-ede69c516560" (UID: "a9e34c17-fba9-4efa-8912-ede69c516560"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.460293 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73\") " pod="openstack/rabbitmq-server-0" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.484226 5031 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e34c17-fba9-4efa-8912-ede69c516560-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:59 crc kubenswrapper[5031]: I0129 09:02:59.642062 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.049742 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9e34c17-fba9-4efa-8912-ede69c516560","Type":"ContainerDied","Data":"f17d93ce9752ba78dc25a03a48306c6a9300af971fdd648836fba60b20f4588b"} Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.049820 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.050123 5031 scope.go:117] "RemoveContainer" containerID="1e5eb5f612c550d875223b863d54744bd60785ca68ceb3514d702eb8f5ac5363" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.100406 5031 scope.go:117] "RemoveContainer" containerID="248333fd4f79e20db6d18e37d447343ffb055ab9198e066636271c6a0039cfcd" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.105219 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.130405 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.165447 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.167585 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.173327 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.174070 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.174220 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wdwz4" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.174578 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.185824 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.186001 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.186126 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.207972 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.228167 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.313518 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.313746 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtb5\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-kube-api-access-wbtb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.313841 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.313944 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.313980 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3af83c61-d4e1-4694-a820-1bb5529a2bce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.314088 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.314168 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.314260 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.314350 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.320010 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.331041 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3af83c61-d4e1-4694-a820-1bb5529a2bce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.354356 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64621a94-8b58-4593-a9d0-58f0dd3c5e0f" path="/var/lib/kubelet/pods/64621a94-8b58-4593-a9d0-58f0dd3c5e0f/volumes" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.356442 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e34c17-fba9-4efa-8912-ede69c516560" path="/var/lib/kubelet/pods/a9e34c17-fba9-4efa-8912-ede69c516560/volumes" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433603 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433669 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3af83c61-d4e1-4694-a820-1bb5529a2bce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433717 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433752 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtb5\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-kube-api-access-wbtb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433778 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433839 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3af83c61-d4e1-4694-a820-1bb5529a2bce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433861 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433876 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433900 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.433921 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.434623 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.434696 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.434724 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.435100 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.435412 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.435484 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3af83c61-d4e1-4694-a820-1bb5529a2bce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.437261 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.437809 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.438584 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3af83c61-d4e1-4694-a820-1bb5529a2bce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.439667 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3af83c61-d4e1-4694-a820-1bb5529a2bce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.453025 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtb5\" (UniqueName: \"kubernetes.io/projected/3af83c61-d4e1-4694-a820-1bb5529a2bce-kube-api-access-wbtb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.476537 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3af83c61-d4e1-4694-a820-1bb5529a2bce\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.532785 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:03:00 crc kubenswrapper[5031]: I0129 09:03:00.983842 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:03:01 crc kubenswrapper[5031]: I0129 09:03:01.064350 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3af83c61-d4e1-4694-a820-1bb5529a2bce","Type":"ContainerStarted","Data":"0d80ca09a7dbe14fcd4f425f2e1ba154e49a4bc6bd1ac7f3a1629b2c5420c61f"} Jan 29 09:03:01 crc kubenswrapper[5031]: I0129 09:03:01.068187 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73","Type":"ContainerStarted","Data":"bab9bcced1ecc911f8481d706388045eead39b2d183a465c6cc39f6b6597291b"} Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.084136 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73","Type":"ContainerStarted","Data":"0053ffb671a60a0380002bd39dfb22743eb066e57b4c84e7485fd9280f4aeb77"} Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.113444 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.124561 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.125895 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.129010 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.274766 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.275103 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.275214 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.275348 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59bpc\" (UniqueName: \"kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.275402 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.275575 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377279 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377361 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: 
\"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377428 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377487 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59bpc\" (UniqueName: \"kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377536 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.377647 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.378483 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.379053 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.379396 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.379697 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.379762 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 
crc kubenswrapper[5031]: I0129 09:03:02.461839 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59bpc\" (UniqueName: \"kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc\") pod \"dnsmasq-dns-578b8d767c-qndhw\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.466501 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:02 crc kubenswrapper[5031]: I0129 09:03:02.959276 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:03 crc kubenswrapper[5031]: I0129 09:03:03.094742 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3af83c61-d4e1-4694-a820-1bb5529a2bce","Type":"ContainerStarted","Data":"eca0b954176d26dbb09f139618839761b0a1b8d3eb31ea30cc14bf0a3d71a80a"} Jan 29 09:03:03 crc kubenswrapper[5031]: I0129 09:03:03.108709 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" event={"ID":"6d23adb6-9455-44d0-a9d0-68bb335445d2","Type":"ContainerStarted","Data":"6d89d629b506afacfd47dbfbaf50cdfa0bf843a8195ee45778c4dfad84e76172"} Jan 29 09:03:04 crc kubenswrapper[5031]: I0129 09:03:04.118906 5031 generic.go:334] "Generic (PLEG): container finished" podID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerID="dd28cf646bc6a28a2c79331084a0a832f8c80193d504956f86b284dd6c8e5fe3" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[5031]: I0129 09:03:04.119002 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" event={"ID":"6d23adb6-9455-44d0-a9d0-68bb335445d2","Type":"ContainerDied","Data":"dd28cf646bc6a28a2c79331084a0a832f8c80193d504956f86b284dd6c8e5fe3"} Jan 29 09:03:05 crc kubenswrapper[5031]: I0129 09:03:05.129777 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" event={"ID":"6d23adb6-9455-44d0-a9d0-68bb335445d2","Type":"ContainerStarted","Data":"253a5abae99c5c4fba1797f37418f0f208d4f71bad93d655befe6b73925c62d7"} Jan 29 09:03:05 crc kubenswrapper[5031]: I0129 09:03:05.130029 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:05 crc kubenswrapper[5031]: I0129 09:03:05.156783 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" podStartSLOduration=3.156762217 podStartE2EDuration="3.156762217s" podCreationTimestamp="2026-01-29 09:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:05.146963712 +0000 UTC m=+1465.646551664" watchObservedRunningTime="2026-01-29 09:03:05.156762217 +0000 UTC m=+1465.656350169" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.468692 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.529046 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.529313 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" 
podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="dnsmasq-dns" containerID="cri-o://0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416" gracePeriod=10 Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.681333 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"] Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.683617 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.703244 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"] Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.787359 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.787432 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.787459 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.787721 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.787867 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz7m\" (UniqueName: \"kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.788014 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894643 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894705 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mz7m\" (UniqueName: \"kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894754 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894803 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894819 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.894834 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.895696 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.895847 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.896439 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.896641 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.897276 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:12 crc kubenswrapper[5031]: I0129 09:03:12.924089 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mz7m\" (UniqueName: \"kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m\") pod \"dnsmasq-dns-fbc59fbb7-vgw8k\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.020761 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.146902 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.216118 5031 generic.go:334] "Generic (PLEG): container finished" podID="326ac964-161b-4a55-9bc5-ba303d325d27" containerID="0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416" exitCode=0 Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.216160 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" event={"ID":"326ac964-161b-4a55-9bc5-ba303d325d27","Type":"ContainerDied","Data":"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416"} Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.216191 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" event={"ID":"326ac964-161b-4a55-9bc5-ba303d325d27","Type":"ContainerDied","Data":"a7f0f556c8f121fb9ce2aeaee2af3d66d88f05a18ba855f69337bbce40a1c822"} Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.216209 5031 scope.go:117] "RemoveContainer" containerID="0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.216390 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-2g7mg" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.243491 5031 scope.go:117] "RemoveContainer" containerID="905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.267057 5031 scope.go:117] "RemoveContainer" containerID="0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416" Jan 29 09:03:13 crc kubenswrapper[5031]: E0129 09:03:13.267899 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416\": container with ID starting with 0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416 not found: ID does not exist" containerID="0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.267946 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416"} err="failed to get container status \"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416\": rpc error: code = NotFound desc = could not find container \"0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416\": container with ID starting with 0b1e304cbe0aebdec39534e0e061e59378f636e00ece0f541b506e0cd9328416 not found: ID does not exist" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.267975 5031 scope.go:117] "RemoveContainer" containerID="905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2" Jan 29 09:03:13 crc kubenswrapper[5031]: E0129 09:03:13.268473 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2\": container with ID starting with 905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2 not found: ID does not exist" containerID="905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.268529 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2"} err="failed to get container status \"905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2\": rpc error: code = NotFound desc = could not find container \"905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2\": container with ID starting with 905d5644f0f3d1244f8924a4c217912416d1192de8eb1823f12c3f0ab768c9a2 not found: ID does not exist" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.303433 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc\") pod \"326ac964-161b-4a55-9bc5-ba303d325d27\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.303523 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7frq\" (UniqueName: \"kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq\") pod \"326ac964-161b-4a55-9bc5-ba303d325d27\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.303626 5031 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config\") pod \"326ac964-161b-4a55-9bc5-ba303d325d27\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.303650 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb\") pod \"326ac964-161b-4a55-9bc5-ba303d325d27\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.303740 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb\") pod \"326ac964-161b-4a55-9bc5-ba303d325d27\" (UID: \"326ac964-161b-4a55-9bc5-ba303d325d27\") " Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.309556 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq" (OuterVolumeSpecName: "kube-api-access-p7frq") pod "326ac964-161b-4a55-9bc5-ba303d325d27" (UID: "326ac964-161b-4a55-9bc5-ba303d325d27"). InnerVolumeSpecName "kube-api-access-p7frq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.359760 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "326ac964-161b-4a55-9bc5-ba303d325d27" (UID: "326ac964-161b-4a55-9bc5-ba303d325d27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.360534 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "326ac964-161b-4a55-9bc5-ba303d325d27" (UID: "326ac964-161b-4a55-9bc5-ba303d325d27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.363270 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "326ac964-161b-4a55-9bc5-ba303d325d27" (UID: "326ac964-161b-4a55-9bc5-ba303d325d27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.367659 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config" (OuterVolumeSpecName: "config") pod "326ac964-161b-4a55-9bc5-ba303d325d27" (UID: "326ac964-161b-4a55-9bc5-ba303d325d27"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.406636 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.406667 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7frq\" (UniqueName: \"kubernetes.io/projected/326ac964-161b-4a55-9bc5-ba303d325d27-kube-api-access-p7frq\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.406678 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.406689 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.406697 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326ac964-161b-4a55-9bc5-ba303d325d27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.507438 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"] Jan 29 09:03:13 crc kubenswrapper[5031]: W0129 09:03:13.514501 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415da4d0_c38a_48ff_a0ed_8dccab506bca.slice/crio-59025fade142afdcca41acb4b90aa7b5153f916ca3b1120990327b7da926fc1d WatchSource:0}: Error finding container 59025fade142afdcca41acb4b90aa7b5153f916ca3b1120990327b7da926fc1d: Status 404 returned error can't find the container with id 59025fade142afdcca41acb4b90aa7b5153f916ca3b1120990327b7da926fc1d Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.559905 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:03:13 crc kubenswrapper[5031]: I0129 09:03:13.569567 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-2g7mg"] Jan 29 09:03:14 crc kubenswrapper[5031]: I0129 09:03:14.234290 5031 generic.go:334] "Generic (PLEG): container finished" podID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerID="1cd72422f10bedd3e9139795b2d142915c6af961f6f37df39de766681a245c94" exitCode=0 Jan 29 09:03:14 crc kubenswrapper[5031]: I0129 09:03:14.234389 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" event={"ID":"415da4d0-c38a-48ff-a0ed-8dccab506bca","Type":"ContainerDied","Data":"1cd72422f10bedd3e9139795b2d142915c6af961f6f37df39de766681a245c94"} Jan 29 09:03:14 crc kubenswrapper[5031]: I0129 09:03:14.234785 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" event={"ID":"415da4d0-c38a-48ff-a0ed-8dccab506bca","Type":"ContainerStarted","Data":"59025fade142afdcca41acb4b90aa7b5153f916ca3b1120990327b7da926fc1d"} Jan 29 09:03:14 crc kubenswrapper[5031]: I0129 09:03:14.297337 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" path="/var/lib/kubelet/pods/326ac964-161b-4a55-9bc5-ba303d325d27/volumes" Jan 29 
09:03:15 crc kubenswrapper[5031]: I0129 09:03:15.249026 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" event={"ID":"415da4d0-c38a-48ff-a0ed-8dccab506bca","Type":"ContainerStarted","Data":"541d39aaab80762a1903bd6d6d3ba809648d9bfec33ccc5156b026a0496091e5"} Jan 29 09:03:15 crc kubenswrapper[5031]: I0129 09:03:15.249424 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:15 crc kubenswrapper[5031]: I0129 09:03:15.269730 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" podStartSLOduration=3.269713237 podStartE2EDuration="3.269713237s" podCreationTimestamp="2026-01-29 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:15.268429282 +0000 UTC m=+1475.768017234" watchObservedRunningTime="2026-01-29 09:03:15.269713237 +0000 UTC m=+1475.769301189" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.023552 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.093196 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.093443 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="dnsmasq-dns" containerID="cri-o://253a5abae99c5c4fba1797f37418f0f208d4f71bad93d655befe6b73925c62d7" gracePeriod=10 Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.379792 5031 generic.go:334] "Generic (PLEG): container finished" podID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerID="253a5abae99c5c4fba1797f37418f0f208d4f71bad93d655befe6b73925c62d7" exitCode=0 Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.380828 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" event={"ID":"6d23adb6-9455-44d0-a9d0-68bb335445d2","Type":"ContainerDied","Data":"253a5abae99c5c4fba1797f37418f0f208d4f71bad93d655befe6b73925c62d7"} Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.752811 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840304 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59bpc\" (UniqueName: \"kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840382 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840461 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840677 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840731 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.840776 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb\") pod \"6d23adb6-9455-44d0-a9d0-68bb335445d2\" (UID: \"6d23adb6-9455-44d0-a9d0-68bb335445d2\") " Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.857082 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc" (OuterVolumeSpecName: "kube-api-access-59bpc") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "kube-api-access-59bpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.903389 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.907475 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config" (OuterVolumeSpecName: "config") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.914742 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.927249 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.927784 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d23adb6-9455-44d0-a9d0-68bb335445d2" (UID: "6d23adb6-9455-44d0-a9d0-68bb335445d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943061 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943103 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943121 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943134 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59bpc\" (UniqueName: \"kubernetes.io/projected/6d23adb6-9455-44d0-a9d0-68bb335445d2-kube-api-access-59bpc\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943146 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:23 crc kubenswrapper[5031]: I0129 09:03:23.943159 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d23adb6-9455-44d0-a9d0-68bb335445d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.390626 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" event={"ID":"6d23adb6-9455-44d0-a9d0-68bb335445d2","Type":"ContainerDied","Data":"6d89d629b506afacfd47dbfbaf50cdfa0bf843a8195ee45778c4dfad84e76172"} Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.390706 5031 scope.go:117] "RemoveContainer" containerID="253a5abae99c5c4fba1797f37418f0f208d4f71bad93d655befe6b73925c62d7" Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.390704 5031 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qndhw" Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.417323 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.417853 5031 scope.go:117] "RemoveContainer" containerID="dd28cf646bc6a28a2c79331084a0a832f8c80193d504956f86b284dd6c8e5fe3" Jan 29 09:03:24 crc kubenswrapper[5031]: I0129 09:03:24.426539 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qndhw"] Jan 29 09:03:26 crc kubenswrapper[5031]: I0129 09:03:26.292473 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" path="/var/lib/kubelet/pods/6d23adb6-9455-44d0-a9d0-68bb335445d2/volumes" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.738123 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"] Jan 29 09:03:33 crc kubenswrapper[5031]: E0129 09:03:33.739232 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739249 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: E0129 09:03:33.739275 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="init" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739283 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="init" Jan 29 09:03:33 crc kubenswrapper[5031]: E0129 09:03:33.739294 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739316 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: E0129 09:03:33.739335 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="init" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739342 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="init" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739571 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="326ac964-161b-4a55-9bc5-ba303d325d27" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.739590 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d23adb6-9455-44d0-a9d0-68bb335445d2" containerName="dnsmasq-dns" Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.740445 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.745022 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.745358 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.745580 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.750752 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.781889 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"]
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.837355 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.837432 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.837559 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rzqw\" (UniqueName: \"kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.837608 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.939738 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.939790 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.940163 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.940606 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rzqw\" (UniqueName: \"kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.945863 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.945880 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.946925 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:33 crc kubenswrapper[5031]: I0129 09:03:33.956990 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rzqw\" (UniqueName: \"kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:34 crc kubenswrapper[5031]: I0129 09:03:34.063470 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:03:34 crc kubenswrapper[5031]: I0129 09:03:34.474683 5031 generic.go:334] "Generic (PLEG): container finished" podID="6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73" containerID="0053ffb671a60a0380002bd39dfb22743eb066e57b4c84e7485fd9280f4aeb77" exitCode=0
Jan 29 09:03:34 crc kubenswrapper[5031]: I0129 09:03:34.474745 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73","Type":"ContainerDied","Data":"0053ffb671a60a0380002bd39dfb22743eb066e57b4c84e7485fd9280f4aeb77"}
Jan 29 09:03:34 crc kubenswrapper[5031]: I0129 09:03:34.764413 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"]
Jan 29 09:03:34 crc kubenswrapper[5031]: W0129 09:03:34.767765 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04bc9814_a834_48e6_9096_c233ccd1d5e0.slice/crio-c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768 WatchSource:0}: Error finding container c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768: Status 404 returned error can't find the container with id c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.492426 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73","Type":"ContainerStarted","Data":"6f2e065b3662ecfade2dd915e96b6591d382d34a2008e80bcefe02c4548eaea6"}
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.492941 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.493945 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd" event={"ID":"04bc9814-a834-48e6-9096-c233ccd1d5e0","Type":"ContainerStarted","Data":"c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768"}
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.496831 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3af83c61-d4e1-4694-a820-1bb5529a2bce","Type":"ContainerDied","Data":"eca0b954176d26dbb09f139618839761b0a1b8d3eb31ea30cc14bf0a3d71a80a"}
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.496810 5031 generic.go:334] "Generic (PLEG): container finished" podID="3af83c61-d4e1-4694-a820-1bb5529a2bce" containerID="eca0b954176d26dbb09f139618839761b0a1b8d3eb31ea30cc14bf0a3d71a80a" exitCode=0
Jan 29 09:03:35 crc kubenswrapper[5031]: I0129 09:03:35.537512 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.537486648 podStartE2EDuration="36.537486648s" podCreationTimestamp="2026-01-29 09:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:35.525831917 +0000 UTC m=+1496.025419889" watchObservedRunningTime="2026-01-29 09:03:35.537486648 +0000 UTC m=+1496.037074600"
Jan 29 09:03:36 crc kubenswrapper[5031]: I0129 09:03:36.510495 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3af83c61-d4e1-4694-a820-1bb5529a2bce","Type":"ContainerStarted","Data":"5e1faac942ecbb3a162aa205b2ec4992535e6b1e6439d7344a8c6ba34d59f411"}
Jan 29 09:03:36 crc kubenswrapper[5031]: I0129 09:03:36.511348 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:03:36 crc kubenswrapper[5031]: I0129 09:03:36.546063 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.546042014 podStartE2EDuration="36.546042014s" podCreationTimestamp="2026-01-29 09:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:36.538716589 +0000 UTC m=+1497.038304561" watchObservedRunningTime="2026-01-29 09:03:36.546042014 +0000 UTC m=+1497.045629966"
Jan 29 09:03:38 crc kubenswrapper[5031]: I0129 09:03:38.493193 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:03:38 crc kubenswrapper[5031]: I0129 09:03:38.493552 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:03:47 crc kubenswrapper[5031]: I0129 09:03:47.166745 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:03:47 crc kubenswrapper[5031]: I0129 09:03:47.644188 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd" event={"ID":"04bc9814-a834-48e6-9096-c233ccd1d5e0","Type":"ContainerStarted","Data":"1267958ea49b6af110c56a3a00b046ee49d81176aa0d3b6f1891e7e5ad11f881"}
Jan 29 09:03:47 crc kubenswrapper[5031]: I0129 09:03:47.669979 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd" podStartSLOduration=2.2772653370000002 podStartE2EDuration="14.66995614s" podCreationTimestamp="2026-01-29 09:03:33 +0000 UTC" firstStartedPulling="2026-01-29 09:03:34.770269927 +0000 UTC m=+1495.269857879" lastFinishedPulling="2026-01-29 09:03:47.16296073 +0000 UTC m=+1507.662548682" observedRunningTime="2026-01-29 09:03:47.660582631 +0000 UTC m=+1508.160170593" watchObservedRunningTime="2026-01-29 09:03:47.66995614 +0000 UTC m=+1508.169544082"
Jan 29 09:03:49 crc kubenswrapper[5031]: I0129 09:03:49.646578 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 29 09:03:50 crc kubenswrapper[5031]: I0129 09:03:50.536604 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:03:59 crc kubenswrapper[5031]: I0129 09:03:59.765021 5031 generic.go:334] "Generic (PLEG): container finished" podID="04bc9814-a834-48e6-9096-c233ccd1d5e0" containerID="1267958ea49b6af110c56a3a00b046ee49d81176aa0d3b6f1891e7e5ad11f881" exitCode=0
Jan 29 09:03:59 crc kubenswrapper[5031]: I0129 09:03:59.765122 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd" event={"ID":"04bc9814-a834-48e6-9096-c233ccd1d5e0","Type":"ContainerDied","Data":"1267958ea49b6af110c56a3a00b046ee49d81176aa0d3b6f1891e7e5ad11f881"}
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.287650 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.475226 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam\") pod \"04bc9814-a834-48e6-9096-c233ccd1d5e0\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") "
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.475684 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rzqw\" (UniqueName: \"kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw\") pod \"04bc9814-a834-48e6-9096-c233ccd1d5e0\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") "
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.475762 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle\") pod \"04bc9814-a834-48e6-9096-c233ccd1d5e0\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") "
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.475802 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory\") pod \"04bc9814-a834-48e6-9096-c233ccd1d5e0\" (UID: \"04bc9814-a834-48e6-9096-c233ccd1d5e0\") "
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.481560 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw" (OuterVolumeSpecName: "kube-api-access-9rzqw") pod "04bc9814-a834-48e6-9096-c233ccd1d5e0" (UID: "04bc9814-a834-48e6-9096-c233ccd1d5e0"). InnerVolumeSpecName "kube-api-access-9rzqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.482551 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "04bc9814-a834-48e6-9096-c233ccd1d5e0" (UID: "04bc9814-a834-48e6-9096-c233ccd1d5e0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.504032 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory" (OuterVolumeSpecName: "inventory") pod "04bc9814-a834-48e6-9096-c233ccd1d5e0" (UID: "04bc9814-a834-48e6-9096-c233ccd1d5e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.505713 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04bc9814-a834-48e6-9096-c233ccd1d5e0" (UID: "04bc9814-a834-48e6-9096-c233ccd1d5e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.577448 5031 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.577826 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.577955 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04bc9814-a834-48e6-9096-c233ccd1d5e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.578041 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rzqw\" (UniqueName: \"kubernetes.io/projected/04bc9814-a834-48e6-9096-c233ccd1d5e0-kube-api-access-9rzqw\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.786015 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd" event={"ID":"04bc9814-a834-48e6-9096-c233ccd1d5e0","Type":"ContainerDied","Data":"c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768"}
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.786065 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3202abe52b51010997a9121a26a96fc96fef984628c2db893ae88aceb76b768"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.786103 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.853062 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"]
Jan 29 09:04:01 crc kubenswrapper[5031]: E0129 09:04:01.853457 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bc9814-a834-48e6-9096-c233ccd1d5e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.853469 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bc9814-a834-48e6-9096-c233ccd1d5e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.853632 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bc9814-a834-48e6-9096-c233ccd1d5e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.854217 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.857277 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.857765 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.862182 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.862632 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.866191 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"]
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.884288 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.884392 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.884448 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.884495 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbjd\" (UniqueName: \"kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.986219 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.986575 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbjd\" (UniqueName: \"kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.986675 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.986835 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.991424 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.992666 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:01 crc kubenswrapper[5031]: I0129 09:04:01.994265 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:02 crc kubenswrapper[5031]: I0129 09:04:02.005337 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbjd\" (UniqueName: \"kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:02 crc kubenswrapper[5031]: I0129 09:04:02.170030 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"
Jan 29 09:04:02 crc kubenswrapper[5031]: I0129 09:04:02.675815 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"]
Jan 29 09:04:02 crc kubenswrapper[5031]: I0129 09:04:02.794826 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" event={"ID":"7e3c382e-3da7-4a2f-8227-e2986b1c28df","Type":"ContainerStarted","Data":"9a7c8001f1eead929e9020e989f4f3f6267a45a884b1b72600c36bc1fe6d69de"}
Jan 29 09:04:03 crc kubenswrapper[5031]: I0129 09:04:03.805761 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" event={"ID":"7e3c382e-3da7-4a2f-8227-e2986b1c28df","Type":"ContainerStarted","Data":"635763c6313da26b6259243e30bb5998eeab71b9dcef1435c8bced51628b5bfe"}
Jan 29 09:04:03 crc kubenswrapper[5031]: I0129 09:04:03.829115 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" podStartSLOduration=2.362290429 podStartE2EDuration="2.829097076s" podCreationTimestamp="2026-01-29 09:04:01 +0000 UTC" firstStartedPulling="2026-01-29 09:04:02.681956999 +0000 UTC m=+1523.181544951" lastFinishedPulling="2026-01-29 09:04:03.148763646 +0000 UTC m=+1523.648351598" observedRunningTime="2026-01-29 09:04:03.820469776 +0000 UTC m=+1524.320057748" watchObservedRunningTime="2026-01-29 09:04:03.829097076 +0000 UTC m=+1524.328685028"
Jan 29 09:04:05 crc kubenswrapper[5031]: I0129 09:04:05.803446 5031 scope.go:117] "RemoveContainer" containerID="104d9a34b5c68baed1e2bf10c3a91ab52c89d2ccce0e11a7258ba174d5aba08a"
Jan 29 09:04:05 crc kubenswrapper[5031]: I0129 09:04:05.834391 5031 scope.go:117] "RemoveContainer" containerID="d308cbaf1d8f06db09add169a2872364927af335501f931edf11fcafcddf42c0"
Jan 29 09:04:08 crc kubenswrapper[5031]: I0129 09:04:08.493298 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:04:08 crc kubenswrapper[5031]: I0129 09:04:08.493674 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.718525 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.721342 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.746521 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.910544 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqpb\" (UniqueName: \"kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.910608 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:28 crc kubenswrapper[5031]: I0129 09:04:28.910948 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.012879 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.013075 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqpb\" (UniqueName: \"kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.013098 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.013549 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.013598 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.051221 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqpb\" (UniqueName: \"kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb\") pod \"community-operators-zntf6\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") " pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.344651 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:29 crc kubenswrapper[5031]: I0129 09:04:29.833824 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:30 crc kubenswrapper[5031]: I0129 09:04:30.045209 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerStarted","Data":"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"}
Jan 29 09:04:30 crc kubenswrapper[5031]: I0129 09:04:30.045535 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerStarted","Data":"33c3e0d580691dc66a630e7193ab3d9d5605536f2b9c8f66ffcb18d771300d41"}
Jan 29 09:04:31 crc kubenswrapper[5031]: I0129 09:04:31.057134 5031 generic.go:334] "Generic (PLEG): container finished" podID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerID="426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb" exitCode=0
Jan 29 09:04:31 crc kubenswrapper[5031]: I0129 09:04:31.057203 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerDied","Data":"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"}
Jan 29 09:04:32 crc kubenswrapper[5031]: I0129 09:04:32.073382 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerStarted","Data":"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"}
Jan 29 09:04:33 crc kubenswrapper[5031]: I0129 09:04:33.098285 5031 generic.go:334] "Generic (PLEG): container finished" podID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerID="504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae" exitCode=0
Jan 29 09:04:33 crc kubenswrapper[5031]: I0129 09:04:33.098394 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerDied","Data":"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"}
Jan 29 09:04:35 crc kubenswrapper[5031]: I0129 09:04:35.118718 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerStarted","Data":"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"}
Jan 29 09:04:35 crc kubenswrapper[5031]: I0129 09:04:35.146112 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zntf6" podStartSLOduration=3.682105211 podStartE2EDuration="7.146090877s" podCreationTimestamp="2026-01-29 09:04:28 +0000 UTC" firstStartedPulling="2026-01-29 09:04:31.059292068 +0000 UTC m=+1551.558880020" lastFinishedPulling="2026-01-29 09:04:34.523277734 +0000 UTC m=+1555.022865686" observedRunningTime="2026-01-29 09:04:35.14283525 +0000 UTC m=+1555.642423212" watchObservedRunningTime="2026-01-29 09:04:35.146090877 +0000 UTC m=+1555.645678829"
Jan 29 09:04:38 crc kubenswrapper[5031]: I0129 09:04:38.493553 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:04:38 crc kubenswrapper[5031]: I0129 09:04:38.493924 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:04:38 crc kubenswrapper[5031]: I0129 09:04:38.493975 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn"
Jan 29 09:04:38 crc kubenswrapper[5031]: I0129 09:04:38.495022 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 09:04:38 crc kubenswrapper[5031]: I0129 09:04:38.495081 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" gracePeriod=600
Jan 29 09:04:38 crc kubenswrapper[5031]: E0129 09:04:38.617293 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.169295 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" exitCode=0
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.170014 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"}
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.170162 5031 scope.go:117] "RemoveContainer" containerID="968b7ae674e15f331a40354ae3280aca1a2d384b002cb22e9f641c2b3f0a41ed"
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.171523 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:04:39 crc kubenswrapper[5031]: E0129 09:04:39.172749 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.345538 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.345609 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:39 crc kubenswrapper[5031]: I0129 09:04:39.390996 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:40 crc kubenswrapper[5031]: I0129 09:04:40.226488 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:40 crc kubenswrapper[5031]: I0129 09:04:40.305190 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:42 crc kubenswrapper[5031]: I0129 09:04:42.213941 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zntf6" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="registry-server" containerID="cri-o://9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc" gracePeriod=2
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.177043 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.225210 5031 generic.go:334] "Generic (PLEG): container finished" podID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerID="9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc" exitCode=0
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.225266 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zntf6"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.225265 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerDied","Data":"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"}
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.226146 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zntf6" event={"ID":"9df88f00-4028-4e81-9be0-e7bd43ff28f1","Type":"ContainerDied","Data":"33c3e0d580691dc66a630e7193ab3d9d5605536f2b9c8f66ffcb18d771300d41"}
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.226225 5031 scope.go:117] "RemoveContainer" containerID="9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.250669 5031 scope.go:117] "RemoveContainer" containerID="504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.273770 5031 scope.go:117] "RemoveContainer" containerID="426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.292551 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities\") pod \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") "
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.292611 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqpb\" (UniqueName: \"kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb\") pod \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") "
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.292719 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content\") pod \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\" (UID: \"9df88f00-4028-4e81-9be0-e7bd43ff28f1\") "
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.294267 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities" (OuterVolumeSpecName: "utilities") pod "9df88f00-4028-4e81-9be0-e7bd43ff28f1" (UID: "9df88f00-4028-4e81-9be0-e7bd43ff28f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.299954 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb" (OuterVolumeSpecName: "kube-api-access-mrqpb") pod "9df88f00-4028-4e81-9be0-e7bd43ff28f1" (UID: "9df88f00-4028-4e81-9be0-e7bd43ff28f1"). InnerVolumeSpecName "kube-api-access-mrqpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.354113 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9df88f00-4028-4e81-9be0-e7bd43ff28f1" (UID: "9df88f00-4028-4e81-9be0-e7bd43ff28f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.358781 5031 scope.go:117] "RemoveContainer" containerID="9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"
Jan 29 09:04:43 crc kubenswrapper[5031]: E0129 09:04:43.359235 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc\": container with ID starting with 9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc not found: ID does not exist" containerID="9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.359286 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc"} err="failed to get container status \"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc\": rpc error: code = NotFound desc = could not find container \"9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc\": container with ID starting with 9e68c427e4060eacd245cce007788e4c8b16a0e74c2a50e46b04fa39959eb4bc not found: ID does not exist"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.359317 5031 scope.go:117] "RemoveContainer" containerID="504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"
Jan 29 09:04:43 crc kubenswrapper[5031]: E0129 09:04:43.359879 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae\": container with ID starting with 504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae not found: ID does not exist" containerID="504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.359914 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae"} err="failed to get container status \"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae\": rpc error: code = NotFound desc = could not find container \"504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae\": container with ID starting with 504ffa6b999fcfdcbbe17f9c63e478fba079d4dd3366aab1bbac7d70e9ddc8ae not found: ID does not exist"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.359934 5031 scope.go:117] "RemoveContainer" containerID="426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"
Jan 29 09:04:43 crc kubenswrapper[5031]: E0129 09:04:43.360296 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb\": container with ID starting with 426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb not found: ID does not exist" containerID="426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.360359 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb"} err="failed to get container status \"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb\": rpc error: code = NotFound desc = could not find container \"426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb\": container with ID starting with 426672138a3cfec899f0cf7ced638ff2445125eb7c3666d9dc6edcb92545a1eb not found: ID does not exist"
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.395267 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.396124 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqpb\" (UniqueName: \"kubernetes.io/projected/9df88f00-4028-4e81-9be0-e7bd43ff28f1-kube-api-access-mrqpb\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.396169 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df88f00-4028-4e81-9be0-e7bd43ff28f1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.558329 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:43 crc kubenswrapper[5031]: I0129 09:04:43.570477 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zntf6"]
Jan 29 09:04:44 crc kubenswrapper[5031]: I0129 09:04:44.297472 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" path="/var/lib/kubelet/pods/9df88f00-4028-4e81-9be0-e7bd43ff28f1/volumes"
Jan 29 09:04:52 crc kubenswrapper[5031]: I0129 09:04:52.282900 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:04:52 crc kubenswrapper[5031]: E0129 09:04:52.283706 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:03 crc kubenswrapper[5031]: I0129 09:05:03.282488 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:05:03 crc kubenswrapper[5031]: E0129 09:05:03.283510 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:05 crc kubenswrapper[5031]: I0129 09:05:05.958537 5031 scope.go:117] "RemoveContainer" containerID="7e19c038c83184df7760b1682391971e92f4c14e32928e169beba330b490c7d2"
Jan 29 09:05:06 crc kubenswrapper[5031]: I0129 09:05:06.005047 5031 scope.go:117] "RemoveContainer" containerID="80604cfe1e2c531a86bec2175bc5f49c52d4518f6371c416470cd0abb4d2a830"
Jan 29 09:05:06 crc kubenswrapper[5031]: I0129 09:05:06.039354 5031 scope.go:117] "RemoveContainer" containerID="5e9ada4ea25ed430b28581f16d3a62a58f7caabea8e30601e8d306ccfe9c6c5b"
Jan 29 09:05:06 crc kubenswrapper[5031]: I0129 09:05:06.082642 5031 scope.go:117] "RemoveContainer" containerID="1e7b40902b272dbf69bd78c3a0692143594320eed9dd6b309b99d45b6068a6aa"
Jan 29 09:05:15 crc kubenswrapper[5031]: I0129 09:05:15.283195 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:05:15 crc kubenswrapper[5031]: E0129 09:05:15.284428 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:28 crc kubenswrapper[5031]: I0129 09:05:28.282515 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:05:28 crc kubenswrapper[5031]: E0129 09:05:28.283337 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:39 crc kubenswrapper[5031]: I0129 09:05:39.283106 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:05:39 crc kubenswrapper[5031]: E0129 09:05:39.283977 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:54 crc kubenswrapper[5031]: I0129 09:05:54.283957 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:05:54 crc kubenswrapper[5031]: E0129 09:05:54.284722 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.509987 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvslk"]
Jan 29 09:05:58 crc kubenswrapper[5031]: E0129 09:05:58.510994 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="extract-utilities"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.511008 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="extract-utilities"
Jan 29 09:05:58 crc kubenswrapper[5031]: E0129 09:05:58.512411 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="extract-content"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.512428 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="extract-content"
Jan 29 09:05:58 crc kubenswrapper[5031]: E0129 09:05:58.512483 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="registry-server"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.512493 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="registry-server"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.512948 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df88f00-4028-4e81-9be0-e7bd43ff28f1" containerName="registry-server"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.517135 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.535358 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.535955 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtdzp\" (UniqueName: \"kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.536035 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.538210 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvslk"]
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.637930 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.638003 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtdzp\" (UniqueName: \"kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.638023 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.638466 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.638573 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.667178 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtdzp\" (UniqueName: \"kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp\") pod \"certified-operators-cvslk\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") " pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:58 crc kubenswrapper[5031]: I0129 09:05:58.858038 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:05:59 crc kubenswrapper[5031]: I0129 09:05:59.390613 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvslk"]
Jan 29 09:06:00 crc kubenswrapper[5031]: I0129 09:06:00.044626 5031 generic.go:334] "Generic (PLEG): container finished" podID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerID="1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53" exitCode=0
Jan 29 09:06:00 crc kubenswrapper[5031]: I0129 09:06:00.044681 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerDied","Data":"1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53"}
Jan 29 09:06:00 crc kubenswrapper[5031]: I0129 09:06:00.044713 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerStarted","Data":"bd3f328f6aed6d9dcbf5aff2dd46fa0bf784baee48b740b68ecc4da9d67242bb"}
Jan 29 09:06:01 crc kubenswrapper[5031]: I0129 09:06:01.055469 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerStarted","Data":"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba"}
Jan 29 09:06:02 crc kubenswrapper[5031]: I0129 09:06:02.068858 5031 generic.go:334] "Generic (PLEG): container finished" podID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerID="06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba" exitCode=0
Jan 29 09:06:02 crc kubenswrapper[5031]: I0129 09:06:02.068907 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerDied","Data":"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba"}
Jan 29 09:06:03 crc kubenswrapper[5031]: I0129 09:06:03.079353 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerStarted","Data":"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665"}
Jan 29 09:06:03 crc kubenswrapper[5031]: I0129 09:06:03.103017 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvslk" podStartSLOduration=2.564679102 podStartE2EDuration="5.102996074s" podCreationTimestamp="2026-01-29 09:05:58 +0000 UTC" firstStartedPulling="2026-01-29 09:06:00.047629945 +0000 UTC m=+1640.547217887" lastFinishedPulling="2026-01-29 09:06:02.585946897 +0000 UTC m=+1643.085534859" observedRunningTime="2026-01-29 09:06:03.09796784 +0000 UTC m=+1643.597555822" watchObservedRunningTime="2026-01-29 09:06:03.102996074 +0000 UTC m=+1643.602584026"
Jan 29 09:06:05 crc kubenswrapper[5031]: I0129 09:06:05.282568 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:06:05 crc kubenswrapper[5031]: E0129 09:06:05.283195 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:06:06 crc kubenswrapper[5031]: I0129 09:06:06.234278 5031 scope.go:117] "RemoveContainer" containerID="2870feb6f68ec7fd46746b113bb8d2857881d1c5348371a8408a371d7445dc42"
Jan 29 09:06:06 crc kubenswrapper[5031]: I0129 09:06:06.257871 5031 scope.go:117] "RemoveContainer" containerID="9887758ff5d01c7a37bbae159f96c09381d7fef5fa405cabf927f23ebeb86ccb"
Jan 29 09:06:06 crc kubenswrapper[5031]: I0129 09:06:06.290392 5031 scope.go:117] "RemoveContainer" containerID="4dbfe1c48587b57a3581dae11ed7e422649b9577ce4f55d55a47f100e0a83855"
Jan 29 09:06:06 crc kubenswrapper[5031]: I0129 09:06:06.308269 5031 scope.go:117] "RemoveContainer" containerID="14d76079584a5062e530f31b119d8ff265ab554fc478242705e5abba2fec2a30"
Jan 29 09:06:08 crc kubenswrapper[5031]: I0129 09:06:08.859105 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:06:08 crc kubenswrapper[5031]: I0129 09:06:08.860835 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:06:08 crc kubenswrapper[5031]: I0129 09:06:08.911541 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:06:09 crc kubenswrapper[5031]: I0129 09:06:09.171881 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:06:09 crc kubenswrapper[5031]: I0129 09:06:09.229927 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvslk"]
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.144664 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cvslk" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="registry-server" containerID="cri-o://cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665" gracePeriod=2
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.604499 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvslk"
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.690909 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities\") pod \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") "
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.690996 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtdzp\" (UniqueName: \"kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp\") pod \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") "
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.691200 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content\") pod \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\" (UID: \"c84ccdd0-53c4-4f20-ae9b-c5376dce245b\") "
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.693271 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities" (OuterVolumeSpecName: "utilities") pod "c84ccdd0-53c4-4f20-ae9b-c5376dce245b" (UID: "c84ccdd0-53c4-4f20-ae9b-c5376dce245b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.698472 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp" (OuterVolumeSpecName: "kube-api-access-vtdzp") pod "c84ccdd0-53c4-4f20-ae9b-c5376dce245b" (UID: "c84ccdd0-53c4-4f20-ae9b-c5376dce245b"). InnerVolumeSpecName "kube-api-access-vtdzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.750584 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c84ccdd0-53c4-4f20-ae9b-c5376dce245b" (UID: "c84ccdd0-53c4-4f20-ae9b-c5376dce245b"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.793190 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtdzp\" (UniqueName: \"kubernetes.io/projected/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-kube-api-access-vtdzp\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.793230 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:11 crc kubenswrapper[5031]: I0129 09:06:11.793243 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ccdd0-53c4-4f20-ae9b-c5376dce245b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.154249 5031 generic.go:334] "Generic (PLEG): container finished" podID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerID="cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665" exitCode=0 Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.154297 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerDied","Data":"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665"} Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.154304 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvslk" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.154323 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvslk" event={"ID":"c84ccdd0-53c4-4f20-ae9b-c5376dce245b","Type":"ContainerDied","Data":"bd3f328f6aed6d9dcbf5aff2dd46fa0bf784baee48b740b68ecc4da9d67242bb"} Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.154339 5031 scope.go:117] "RemoveContainer" containerID="cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.181440 5031 scope.go:117] "RemoveContainer" containerID="06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.195431 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvslk"] Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.206165 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cvslk"] Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.220752 5031 scope.go:117] "RemoveContainer" containerID="1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.256037 5031 scope.go:117] "RemoveContainer" containerID="cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665" Jan 29 09:06:12 crc kubenswrapper[5031]: E0129 09:06:12.256914 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665\": container with ID starting with cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665 not found: ID does not exist" containerID="cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.256994 
5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665"} err="failed to get container status \"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665\": rpc error: code = NotFound desc = could not find container \"cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665\": container with ID starting with cfa0cfa84a1e387d94986f77ff4f9d6f954eb896278fd0bc610731eeb7e05665 not found: ID does not exist" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.257024 5031 scope.go:117] "RemoveContainer" containerID="06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba" Jan 29 09:06:12 crc kubenswrapper[5031]: E0129 09:06:12.257396 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba\": container with ID starting with 06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba not found: ID does not exist" containerID="06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.257542 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba"} err="failed to get container status \"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba\": rpc error: code = NotFound desc = could not find container \"06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba\": container with ID starting with 06770e0ce616138af8a2bad9b1570606cfc000e6c8c2bd47cd6ffca3168f94ba not found: ID does not exist" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.257658 5031 scope.go:117] "RemoveContainer" containerID="1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53" Jan 29 09:06:12 crc kubenswrapper[5031]: E0129 09:06:12.258077 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53\": container with ID starting with 1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53 not found: ID does not exist" containerID="1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.258185 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53"} err="failed to get container status \"1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53\": rpc error: code = NotFound desc = could not find container \"1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53\": container with ID starting with 1e5ad03b14431653abb8ea8b41acd183e047408ef0678857d951dd9ef9a97c53 not found: ID does not exist" Jan 29 09:06:12 crc kubenswrapper[5031]: I0129 09:06:12.293631 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" path="/var/lib/kubelet/pods/c84ccdd0-53c4-4f20-ae9b-c5376dce245b/volumes" Jan 29 09:06:18 crc kubenswrapper[5031]: I0129 09:06:18.282742 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:06:18 crc kubenswrapper[5031]: E0129 09:06:18.283586 5031 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:06:32 crc kubenswrapper[5031]: I0129 09:06:32.282704 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:06:32 crc kubenswrapper[5031]: E0129 09:06:32.283620 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:06:46 crc kubenswrapper[5031]: I0129 09:06:46.282717 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:06:46 crc kubenswrapper[5031]: E0129 09:06:46.283430 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:06:53 crc kubenswrapper[5031]: I0129 09:06:53.524839 5031 generic.go:334] "Generic (PLEG): container finished" podID="7e3c382e-3da7-4a2f-8227-e2986b1c28df" containerID="635763c6313da26b6259243e30bb5998eeab71b9dcef1435c8bced51628b5bfe" exitCode=0 Jan 29 09:06:53 crc kubenswrapper[5031]: I0129 09:06:53.524910 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" event={"ID":"7e3c382e-3da7-4a2f-8227-e2986b1c28df","Type":"ContainerDied","Data":"635763c6313da26b6259243e30bb5998eeab71b9dcef1435c8bced51628b5bfe"} Jan 29 09:06:54 crc kubenswrapper[5031]: I0129 09:06:54.957908 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.140548 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory\") pod \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.140721 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle\") pod \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.140914 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzbjd\" (UniqueName: \"kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd\") pod \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.141006 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam\") pod \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\" (UID: \"7e3c382e-3da7-4a2f-8227-e2986b1c28df\") " Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.146884 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd" (OuterVolumeSpecName: "kube-api-access-jzbjd") pod "7e3c382e-3da7-4a2f-8227-e2986b1c28df" (UID: "7e3c382e-3da7-4a2f-8227-e2986b1c28df"). InnerVolumeSpecName "kube-api-access-jzbjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.146875 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7e3c382e-3da7-4a2f-8227-e2986b1c28df" (UID: "7e3c382e-3da7-4a2f-8227-e2986b1c28df"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.168781 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory" (OuterVolumeSpecName: "inventory") pod "7e3c382e-3da7-4a2f-8227-e2986b1c28df" (UID: "7e3c382e-3da7-4a2f-8227-e2986b1c28df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.176456 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e3c382e-3da7-4a2f-8227-e2986b1c28df" (UID: "7e3c382e-3da7-4a2f-8227-e2986b1c28df"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.243402 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.243443 5031 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.243456 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzbjd\" (UniqueName: \"kubernetes.io/projected/7e3c382e-3da7-4a2f-8227-e2986b1c28df-kube-api-access-jzbjd\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.243465 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e3c382e-3da7-4a2f-8227-e2986b1c28df-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.545515 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" event={"ID":"7e3c382e-3da7-4a2f-8227-e2986b1c28df","Type":"ContainerDied","Data":"9a7c8001f1eead929e9020e989f4f3f6267a45a884b1b72600c36bc1fe6d69de"} Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.545811 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7c8001f1eead929e9020e989f4f3f6267a45a884b1b72600c36bc1fe6d69de" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.545574 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.644234 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z"] Jan 29 09:06:55 crc kubenswrapper[5031]: E0129 09:06:55.644943 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="extract-utilities" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.644971 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="extract-utilities" Jan 29 09:06:55 crc kubenswrapper[5031]: E0129 09:06:55.644998 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="registry-server" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.645008 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="registry-server" Jan 29 09:06:55 crc kubenswrapper[5031]: E0129 09:06:55.645027 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="extract-content" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.645037 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="extract-content" Jan 29 09:06:55 crc kubenswrapper[5031]: E0129 09:06:55.645067 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3c382e-3da7-4a2f-8227-e2986b1c28df" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.645077 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3c382e-3da7-4a2f-8227-e2986b1c28df" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.645286 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3c382e-3da7-4a2f-8227-e2986b1c28df" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.645308 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84ccdd0-53c4-4f20-ae9b-c5376dce245b" containerName="registry-server" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.646183 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.649263 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.650739 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.650794 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjtnc\" (UniqueName: \"kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.650895 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.653239 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z"] Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.654412 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.654565 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.655191 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.752551 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.752602 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjtnc\" (UniqueName: \"kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.752653 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.757215 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.757466 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.768681 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjtnc\" (UniqueName: \"kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-djn6z\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:55 crc kubenswrapper[5031]: I0129 09:06:55.964724 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:06:56 crc kubenswrapper[5031]: I0129 09:06:56.493894 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z"] Jan 29 09:06:56 crc kubenswrapper[5031]: I0129 09:06:56.556589 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" event={"ID":"ddf53718-01d7-424d-a46a-949b1aff7342","Type":"ContainerStarted","Data":"9717bc9fa133a3b56d53669e1f0fe6a185c62acee80a4a8d92acf398dd9a0ae3"} Jan 29 09:06:57 crc kubenswrapper[5031]: I0129 09:06:57.569333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" event={"ID":"ddf53718-01d7-424d-a46a-949b1aff7342","Type":"ContainerStarted","Data":"09ac403fa797218fc8b7c014fcfbbc85a6ee80f8e5c4841aaffe56da4133934d"} Jan 29 09:06:57 crc kubenswrapper[5031]: I0129 09:06:57.593882 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" podStartSLOduration=2.140477717 podStartE2EDuration="2.593857563s" podCreationTimestamp="2026-01-29 09:06:55 +0000 UTC" firstStartedPulling="2026-01-29 09:06:56.499916024 +0000 UTC m=+1696.999503976" lastFinishedPulling="2026-01-29 09:06:56.95329587 +0000 UTC m=+1697.452883822" observedRunningTime="2026-01-29 09:06:57.583548617 +0000 UTC m=+1698.083136579" watchObservedRunningTime="2026-01-29 09:06:57.593857563 +0000 UTC m=+1698.093445515" Jan 29 09:06:59 crc kubenswrapper[5031]: I0129 09:06:59.283494 5031 scope.go:117] "RemoveContainer" 
containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:06:59 crc kubenswrapper[5031]: E0129 09:06:59.284039 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:07:10 crc kubenswrapper[5031]: I0129 09:07:10.289007 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:07:10 crc kubenswrapper[5031]: E0129 09:07:10.289838 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:07:22 crc kubenswrapper[5031]: I0129 09:07:22.283725 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:07:22 crc kubenswrapper[5031]: E0129 09:07:22.284560 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:07:35 crc kubenswrapper[5031]: I0129 09:07:35.283694 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:07:35 crc kubenswrapper[5031]: E0129 09:07:35.284550 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:07:50 crc kubenswrapper[5031]: I0129 09:07:50.302057 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:07:50 crc kubenswrapper[5031]: E0129 09:07:50.302904 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.051462 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-63c5-account-create-update-rrljl"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.072908 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-nsfns"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.084553 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dmnkv"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.092882 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-66e1-account-create-update-mrd9n"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.101917 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nsfns"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.112035 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-63c5-account-create-update-rrljl"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.121013 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-66e1-account-create-update-mrd9n"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.131600 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dmnkv"] Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.301660 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a" path="/var/lib/kubelet/pods/385ba6d3-a61c-413d-bb8f-ff1d5ccd6a2a/volumes" Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.302712 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fee06e8-d5a9-4552-9f69-353f9666a3f2" path="/var/lib/kubelet/pods/3fee06e8-d5a9-4552-9f69-353f9666a3f2/volumes" Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.303545 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670cace3-776d-44d9-91d9-fdcdd5ba1c89" path="/var/lib/kubelet/pods/670cace3-776d-44d9-91d9-fdcdd5ba1c89/volumes" Jan 29 09:07:54 crc kubenswrapper[5031]: I0129 09:07:54.304184 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8aff16d-f588-4c13-be4a-f2cc4bef00df" path="/var/lib/kubelet/pods/b8aff16d-f588-4c13-be4a-f2cc4bef00df/volumes" Jan 29 09:07:55 crc kubenswrapper[5031]: I0129 09:07:55.032858 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-j8wj5"] Jan 29 09:07:55 crc kubenswrapper[5031]: I0129 09:07:55.044273 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-04ac-account-create-update-8pqhd"] Jan 29 09:07:55 crc kubenswrapper[5031]: I0129 09:07:55.053853 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-j8wj5"] Jan 29 09:07:55 crc kubenswrapper[5031]: I0129 09:07:55.064563 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-04ac-account-create-update-8pqhd"] Jan 29 09:07:56 crc kubenswrapper[5031]: I0129 09:07:56.292993 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4401fb39-e95c-475e-8f56-c251f9f2247f" path="/var/lib/kubelet/pods/4401fb39-e95c-475e-8f56-c251f9f2247f/volumes" Jan 29 09:07:56 crc kubenswrapper[5031]: I0129 09:07:56.294173 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7863fb67-80a0-474b-9b3a-f75062688a55" path="/var/lib/kubelet/pods/7863fb67-80a0-474b-9b3a-f75062688a55/volumes" Jan 29 09:08:00 crc kubenswrapper[5031]: I0129 09:08:00.520825 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" 
event={"ID":"ddf53718-01d7-424d-a46a-949b1aff7342","Type":"ContainerDied","Data":"09ac403fa797218fc8b7c014fcfbbc85a6ee80f8e5c4841aaffe56da4133934d"} Jan 29 09:08:00 crc kubenswrapper[5031]: I0129 09:08:00.520766 5031 generic.go:334] "Generic (PLEG): container finished" podID="ddf53718-01d7-424d-a46a-949b1aff7342" containerID="09ac403fa797218fc8b7c014fcfbbc85a6ee80f8e5c4841aaffe56da4133934d" exitCode=0 Jan 29 09:08:01 crc kubenswrapper[5031]: I0129 09:08:01.927721 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:08:01 crc kubenswrapper[5031]: I0129 09:08:01.982985 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam\") pod \"ddf53718-01d7-424d-a46a-949b1aff7342\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " Jan 29 09:08:01 crc kubenswrapper[5031]: I0129 09:08:01.983177 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory\") pod \"ddf53718-01d7-424d-a46a-949b1aff7342\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " Jan 29 09:08:01 crc kubenswrapper[5031]: I0129 09:08:01.983217 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjtnc\" (UniqueName: \"kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc\") pod \"ddf53718-01d7-424d-a46a-949b1aff7342\" (UID: \"ddf53718-01d7-424d-a46a-949b1aff7342\") " Jan 29 09:08:01 crc kubenswrapper[5031]: I0129 09:08:01.989730 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc" (OuterVolumeSpecName: "kube-api-access-fjtnc") pod "ddf53718-01d7-424d-a46a-949b1aff7342" (UID: "ddf53718-01d7-424d-a46a-949b1aff7342"). InnerVolumeSpecName "kube-api-access-fjtnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.016340 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory" (OuterVolumeSpecName: "inventory") pod "ddf53718-01d7-424d-a46a-949b1aff7342" (UID: "ddf53718-01d7-424d-a46a-949b1aff7342"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.023199 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ddf53718-01d7-424d-a46a-949b1aff7342" (UID: "ddf53718-01d7-424d-a46a-949b1aff7342"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.084785 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.084820 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjtnc\" (UniqueName: \"kubernetes.io/projected/ddf53718-01d7-424d-a46a-949b1aff7342-kube-api-access-fjtnc\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.084835 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf53718-01d7-424d-a46a-949b1aff7342-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.540185 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" event={"ID":"ddf53718-01d7-424d-a46a-949b1aff7342","Type":"ContainerDied","Data":"9717bc9fa133a3b56d53669e1f0fe6a185c62acee80a4a8d92acf398dd9a0ae3"} Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.540242 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9717bc9fa133a3b56d53669e1f0fe6a185c62acee80a4a8d92acf398dd9a0ae3" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.540332 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.627339 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4"] Jan 29 09:08:02 crc kubenswrapper[5031]: E0129 09:08:02.627789 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf53718-01d7-424d-a46a-949b1aff7342" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.627808 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf53718-01d7-424d-a46a-949b1aff7342" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.628024 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf53718-01d7-424d-a46a-949b1aff7342" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.628790 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.633261 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.633404 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.633582 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.638493 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.640263 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4"] Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.699516 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2bwg\" (UniqueName: \"kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.699699 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.699892 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.802656 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.802850 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.803026 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2bwg\" (UniqueName: 
\"kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.810699 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.811711 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.822506 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2bwg\" (UniqueName: \"kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:02 crc kubenswrapper[5031]: I0129 09:08:02.947180 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:03 crc kubenswrapper[5031]: I0129 09:08:03.282127 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:08:03 crc kubenswrapper[5031]: E0129 09:08:03.282781 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:08:03 crc kubenswrapper[5031]: I0129 09:08:03.500826 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4"] Jan 29 09:08:03 crc kubenswrapper[5031]: I0129 09:08:03.515583 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:08:03 crc kubenswrapper[5031]: I0129 09:08:03.552056 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" event={"ID":"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0","Type":"ContainerStarted","Data":"c16a9f0a70b1a3f6793470ade710029271b9f6d16b645741e2d79710dcbfe68c"} Jan 29 09:08:04 crc kubenswrapper[5031]: I0129 09:08:04.564247 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" 
event={"ID":"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0","Type":"ContainerStarted","Data":"8a6617cdf9345bf6b24edcd4faa783fff277baa8327d9925e0ba06fe17e947af"} Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.415976 5031 scope.go:117] "RemoveContainer" containerID="be0a93399a4854262399f0a1ee1dedec38e8192ec116fa3b01b011375fe8b7af" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.451835 5031 scope.go:117] "RemoveContainer" containerID="d27a2e4a1417aad474522323cd55b645a0cbdee017f8a1eca19ac943c03430cf" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.484507 5031 scope.go:117] "RemoveContainer" containerID="a69d1a6e5c97193ffce3f3eb6768a0f1a11f9370b6aacfcd10e772597294fbab" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.534092 5031 scope.go:117] "RemoveContainer" containerID="60dfb9aa64b85c3cab9504d2ba64c04a2bb226b42153795c932af735c8855450" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.579019 5031 scope.go:117] "RemoveContainer" containerID="20962b37eaf7a67d3307bc2d81d0178c4ad97215d53f4a87129505fee765c996" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.658583 5031 scope.go:117] "RemoveContainer" containerID="bdbde1af0deb68734a82d570d681d6d66b939ea269ef9332a082762330fb319b" Jan 29 09:08:06 crc kubenswrapper[5031]: I0129 09:08:06.738607 5031 scope.go:117] "RemoveContainer" containerID="53e59f6e812140e255503664f91c80519b4f73f97c3aa25b86d202420c769ef4" Jan 29 09:08:09 crc kubenswrapper[5031]: I0129 09:08:09.618415 5031 generic.go:334] "Generic (PLEG): container finished" podID="e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" containerID="8a6617cdf9345bf6b24edcd4faa783fff277baa8327d9925e0ba06fe17e947af" exitCode=0 Jan 29 09:08:09 crc kubenswrapper[5031]: I0129 09:08:09.618501 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" event={"ID":"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0","Type":"ContainerDied","Data":"8a6617cdf9345bf6b24edcd4faa783fff277baa8327d9925e0ba06fe17e947af"} Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.083706 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.189272 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2bwg\" (UniqueName: \"kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg\") pod \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.189535 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam\") pod \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.189579 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory\") pod \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\" (UID: \"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0\") " Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.214596 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg" (OuterVolumeSpecName: "kube-api-access-m2bwg") pod "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" (UID: "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0"). InnerVolumeSpecName "kube-api-access-m2bwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.225626 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" (UID: "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.226488 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory" (OuterVolumeSpecName: "inventory") pod "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" (UID: "e5bd9e7d-e031-479b-a5cc-62bdce4ecce0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.292950 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.293023 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.293040 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2bwg\" (UniqueName: \"kubernetes.io/projected/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0-kube-api-access-m2bwg\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.638804 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" event={"ID":"e5bd9e7d-e031-479b-a5cc-62bdce4ecce0","Type":"ContainerDied","Data":"c16a9f0a70b1a3f6793470ade710029271b9f6d16b645741e2d79710dcbfe68c"} Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.638853 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16a9f0a70b1a3f6793470ade710029271b9f6d16b645741e2d79710dcbfe68c" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.638905 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.729180 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp"] Jan 29 09:08:11 crc kubenswrapper[5031]: E0129 09:08:11.729802 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.729828 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.730083 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.730882 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.737260 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp"] Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.738454 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.738972 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.739078 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.739860 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.803235 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbnq7\" (UniqueName: \"kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.803324 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.803450 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.905021 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.905560 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbnq7\" (UniqueName: \"kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.905772 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.910174 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.910226 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:11 crc kubenswrapper[5031]: I0129 09:08:11.925270 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbnq7\" (UniqueName: \"kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l89fp\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:12 crc kubenswrapper[5031]: I0129 09:08:12.046950 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:12 crc kubenswrapper[5031]: I0129 09:08:12.568767 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp"] Jan 29 09:08:12 crc kubenswrapper[5031]: I0129 09:08:12.648763 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" event={"ID":"175bc64f-fc57-46b4-bdff-f0fdfaa062ae","Type":"ContainerStarted","Data":"48c36966e58272b9baedc96f3900bb553c919684667563e18abb4f522e0545c9"} Jan 29 09:08:13 crc kubenswrapper[5031]: I0129 09:08:13.660039 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" event={"ID":"175bc64f-fc57-46b4-bdff-f0fdfaa062ae","Type":"ContainerStarted","Data":"f8cb3450da831e50f65bd45c2fe072f0e9658654138e584e854b6130807ec146"} Jan 29 09:08:13 crc kubenswrapper[5031]: I0129 09:08:13.685020 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" podStartSLOduration=2.283871408 podStartE2EDuration="2.684997859s" podCreationTimestamp="2026-01-29 09:08:11 +0000 UTC" firstStartedPulling="2026-01-29 09:08:12.57452085 +0000 UTC m=+1773.074108802" lastFinishedPulling="2026-01-29 09:08:12.975647301 +0000 UTC m=+1773.475235253" observedRunningTime="2026-01-29 09:08:13.676404715 +0000 UTC m=+1774.175992667" watchObservedRunningTime="2026-01-29 09:08:13.684997859 +0000 UTC m=+1774.184585811" Jan 29 09:08:14 crc kubenswrapper[5031]: I0129 09:08:14.283515 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:08:14 crc kubenswrapper[5031]: E0129 09:08:14.283833 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:08:21 crc kubenswrapper[5031]: I0129 09:08:21.041762 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f748g"] Jan 29 09:08:21 crc kubenswrapper[5031]: I0129 09:08:21.051894 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f748g"] Jan 29 09:08:22 crc kubenswrapper[5031]: I0129 09:08:22.295121 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b1baa5-afbb-47ea-837a-8a69a979a417" path="/var/lib/kubelet/pods/97b1baa5-afbb-47ea-837a-8a69a979a417/volumes" Jan 29 09:08:25 crc kubenswrapper[5031]: I0129 09:08:25.283132 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:08:25 crc kubenswrapper[5031]: E0129 09:08:25.283725 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.031501 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5nwnb"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.046906 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5nwnb"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.059571 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zc5qg"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.070597 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zc5qg"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.080115 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z4jdg"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.088001 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-72de-account-create-update-9fgbw"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.096261 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z4jdg"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.104851 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3512-account-create-update-44zf5"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.113174 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-72de-account-create-update-9fgbw"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.123693 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3512-account-create-update-44zf5"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.133818 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3c6d-account-create-update-vkqsr"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.142134 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-3c6d-account-create-update-vkqsr"] Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.293882 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008e23fd-2d25-4f4f-bf2e-441c840521e4" path="/var/lib/kubelet/pods/008e23fd-2d25-4f4f-bf2e-441c840521e4/volumes" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.294809 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0405ca10-f433-4290-a19b-5bb83028e6ae" path="/var/lib/kubelet/pods/0405ca10-f433-4290-a19b-5bb83028e6ae/volumes" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.295504 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0a8bed-92bc-406b-b79a-f922b405c505" path="/var/lib/kubelet/pods/ba0a8bed-92bc-406b-b79a-f922b405c505/volumes" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.296227 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49" path="/var/lib/kubelet/pods/bcaa6f76-d7c7-41e5-a54d-d1fc36e63c49/volumes" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.297456 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1bc99f-ba99-439c-b71b-9652c34f6248" path="/var/lib/kubelet/pods/bd1bc99f-ba99-439c-b71b-9652c34f6248/volumes" Jan 29 09:08:36 crc kubenswrapper[5031]: I0129 09:08:36.298480 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41a2f87-dbe3-4248-80d3-70df130c9a2d" path="/var/lib/kubelet/pods/d41a2f87-dbe3-4248-80d3-70df130c9a2d/volumes" Jan 29 09:08:38 crc kubenswrapper[5031]: I0129 09:08:38.282577 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:08:38 crc kubenswrapper[5031]: E0129 09:08:38.283116 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:08:42 crc kubenswrapper[5031]: I0129 09:08:42.029394 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-k647w"] Jan 29 09:08:42 crc kubenswrapper[5031]: I0129 09:08:42.039521 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-k647w"] Jan 29 09:08:42 crc kubenswrapper[5031]: I0129 09:08:42.299581 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085d265b-4cdb-44ae-8a06-fa3962a5546b" path="/var/lib/kubelet/pods/085d265b-4cdb-44ae-8a06-fa3962a5546b/volumes" Jan 29 09:08:48 crc kubenswrapper[5031]: I0129 09:08:48.188126 5031 generic.go:334] "Generic (PLEG): container finished" podID="175bc64f-fc57-46b4-bdff-f0fdfaa062ae" containerID="f8cb3450da831e50f65bd45c2fe072f0e9658654138e584e854b6130807ec146" exitCode=0 Jan 29 09:08:48 crc kubenswrapper[5031]: I0129 09:08:48.188214 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" event={"ID":"175bc64f-fc57-46b4-bdff-f0fdfaa062ae","Type":"ContainerDied","Data":"f8cb3450da831e50f65bd45c2fe072f0e9658654138e584e854b6130807ec146"} Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.282666 5031 scope.go:117] "RemoveContainer" 
containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:08:49 crc kubenswrapper[5031]: E0129 09:08:49.282936 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.617500 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.745754 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbnq7\" (UniqueName: \"kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7\") pod \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.745821 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam\") pod \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.745869 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory\") pod \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\" (UID: \"175bc64f-fc57-46b4-bdff-f0fdfaa062ae\") " Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.751962 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7" (OuterVolumeSpecName: "kube-api-access-tbnq7") pod "175bc64f-fc57-46b4-bdff-f0fdfaa062ae" (UID: "175bc64f-fc57-46b4-bdff-f0fdfaa062ae"). InnerVolumeSpecName "kube-api-access-tbnq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.772195 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory" (OuterVolumeSpecName: "inventory") pod "175bc64f-fc57-46b4-bdff-f0fdfaa062ae" (UID: "175bc64f-fc57-46b4-bdff-f0fdfaa062ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.778612 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "175bc64f-fc57-46b4-bdff-f0fdfaa062ae" (UID: "175bc64f-fc57-46b4-bdff-f0fdfaa062ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.847914 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbnq7\" (UniqueName: \"kubernetes.io/projected/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-kube-api-access-tbnq7\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.847951 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:49 crc kubenswrapper[5031]: I0129 09:08:49.847962 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/175bc64f-fc57-46b4-bdff-f0fdfaa062ae-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.211046 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" event={"ID":"175bc64f-fc57-46b4-bdff-f0fdfaa062ae","Type":"ContainerDied","Data":"48c36966e58272b9baedc96f3900bb553c919684667563e18abb4f522e0545c9"} Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.211518 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c36966e58272b9baedc96f3900bb553c919684667563e18abb4f522e0545c9" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.211125 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.356597 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg"] Jan 29 09:08:50 crc kubenswrapper[5031]: E0129 09:08:50.357931 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175bc64f-fc57-46b4-bdff-f0fdfaa062ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.358048 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="175bc64f-fc57-46b4-bdff-f0fdfaa062ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.358591 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="175bc64f-fc57-46b4-bdff-f0fdfaa062ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.359732 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.366452 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.366457 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.367440 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg"] Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.370187 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.370784 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.467574 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.467816 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.467907 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzrb\" (UniqueName: \"kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.570482 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.570537 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzrb\" (UniqueName: \"kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.570593 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.575807 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.581383 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.598281 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzrb\" (UniqueName: \"kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:50 crc kubenswrapper[5031]: I0129 09:08:50.697786 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:51 crc kubenswrapper[5031]: I0129 09:08:51.051138 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9cqv6"] Jan 29 09:08:51 crc kubenswrapper[5031]: I0129 09:08:51.065295 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9cqv6"] Jan 29 09:08:51 crc kubenswrapper[5031]: I0129 09:08:51.832008 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg"] Jan 29 09:08:52 crc kubenswrapper[5031]: I0129 09:08:52.229464 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" event={"ID":"0f2e9a25-c16f-4e14-9803-25cb31fa3d20","Type":"ContainerStarted","Data":"be67f91ca618bbd5ca75d60bec2623664cd953d02e0751181a9a8cebf81a17aa"} Jan 29 09:08:52 crc kubenswrapper[5031]: I0129 09:08:52.296003 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73ab584-3221-45b8-bc6b-d979c88e8454" path="/var/lib/kubelet/pods/b73ab584-3221-45b8-bc6b-d979c88e8454/volumes" Jan 29 09:08:53 crc kubenswrapper[5031]: I0129 09:08:53.239051 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" event={"ID":"0f2e9a25-c16f-4e14-9803-25cb31fa3d20","Type":"ContainerStarted","Data":"60cb8b21072500335c4e00b8c3edaadda68181b79c9a4fe719cddb84fc520d15"} Jan 29 09:08:53 crc kubenswrapper[5031]: I0129 09:08:53.262150 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" podStartSLOduration=2.791979238 podStartE2EDuration="3.262128155s" podCreationTimestamp="2026-01-29 
09:08:50 +0000 UTC" firstStartedPulling="2026-01-29 09:08:51.838031024 +0000 UTC m=+1812.337618976" lastFinishedPulling="2026-01-29 09:08:52.308179941 +0000 UTC m=+1812.807767893" observedRunningTime="2026-01-29 09:08:53.256122192 +0000 UTC m=+1813.755710164" watchObservedRunningTime="2026-01-29 09:08:53.262128155 +0000 UTC m=+1813.761716107" Jan 29 09:08:56 crc kubenswrapper[5031]: I0129 09:08:56.268189 5031 generic.go:334] "Generic (PLEG): container finished" podID="0f2e9a25-c16f-4e14-9803-25cb31fa3d20" containerID="60cb8b21072500335c4e00b8c3edaadda68181b79c9a4fe719cddb84fc520d15" exitCode=0 Jan 29 09:08:56 crc kubenswrapper[5031]: I0129 09:08:56.268264 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" event={"ID":"0f2e9a25-c16f-4e14-9803-25cb31fa3d20","Type":"ContainerDied","Data":"60cb8b21072500335c4e00b8c3edaadda68181b79c9a4fe719cddb84fc520d15"} Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.659342 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.818979 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlzrb\" (UniqueName: \"kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb\") pod \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.819149 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory\") pod \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.819280 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam\") pod \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\" (UID: \"0f2e9a25-c16f-4e14-9803-25cb31fa3d20\") " Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.825334 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb" (OuterVolumeSpecName: "kube-api-access-xlzrb") pod "0f2e9a25-c16f-4e14-9803-25cb31fa3d20" (UID: "0f2e9a25-c16f-4e14-9803-25cb31fa3d20"). InnerVolumeSpecName "kube-api-access-xlzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.847772 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory" (OuterVolumeSpecName: "inventory") pod "0f2e9a25-c16f-4e14-9803-25cb31fa3d20" (UID: "0f2e9a25-c16f-4e14-9803-25cb31fa3d20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.848580 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f2e9a25-c16f-4e14-9803-25cb31fa3d20" (UID: "0f2e9a25-c16f-4e14-9803-25cb31fa3d20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.922240 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlzrb\" (UniqueName: \"kubernetes.io/projected/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-kube-api-access-xlzrb\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.922301 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:57 crc kubenswrapper[5031]: I0129 09:08:57.922316 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f2e9a25-c16f-4e14-9803-25cb31fa3d20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.290856 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.303753 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg" event={"ID":"0f2e9a25-c16f-4e14-9803-25cb31fa3d20","Type":"ContainerDied","Data":"be67f91ca618bbd5ca75d60bec2623664cd953d02e0751181a9a8cebf81a17aa"} Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.303806 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be67f91ca618bbd5ca75d60bec2623664cd953d02e0751181a9a8cebf81a17aa" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.364938 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8"] Jan 29 09:08:58 crc kubenswrapper[5031]: E0129 09:08:58.365624 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2e9a25-c16f-4e14-9803-25cb31fa3d20" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.365684 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2e9a25-c16f-4e14-9803-25cb31fa3d20" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.365930 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2e9a25-c16f-4e14-9803-25cb31fa3d20" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.366671 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.414469 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.423847 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.424191 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.424337 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.458897 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8"] Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.535277 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.535334 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqn8\" (UniqueName: \"kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.535705 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.637595 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.637744 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.637766 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqn8\" (UniqueName: 
\"kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.642880 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.648392 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.654510 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqn8\" (UniqueName: \"kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:58 crc kubenswrapper[5031]: I0129 09:08:58.731091 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:08:59 crc kubenswrapper[5031]: I0129 09:08:59.240815 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8"] Jan 29 09:08:59 crc kubenswrapper[5031]: I0129 09:08:59.301107 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" event={"ID":"23b883de-aa7a-4b1c-90a7-238ccd739cee","Type":"ContainerStarted","Data":"1a9f7ed860816ca5fe8dcc5f70149ca50c80170f55a88675984dfbc0d6321d95"} Jan 29 09:09:00 crc kubenswrapper[5031]: I0129 09:09:00.349200 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" event={"ID":"23b883de-aa7a-4b1c-90a7-238ccd739cee","Type":"ContainerStarted","Data":"3a09c906266b1f4456ca128f4724e92883764d60e9282266c6ea03368fd9fe65"} Jan 29 09:09:00 crc kubenswrapper[5031]: I0129 09:09:00.373824 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" podStartSLOduration=1.953381456 podStartE2EDuration="2.373803412s" podCreationTimestamp="2026-01-29 09:08:58 +0000 UTC" firstStartedPulling="2026-01-29 09:08:59.249706933 +0000 UTC m=+1819.749294875" lastFinishedPulling="2026-01-29 09:08:59.670128879 +0000 UTC m=+1820.169716831" observedRunningTime="2026-01-29 09:09:00.37078528 +0000 UTC m=+1820.870373232" watchObservedRunningTime="2026-01-29 09:09:00.373803412 +0000 UTC m=+1820.873391364" Jan 29 09:09:01 crc kubenswrapper[5031]: I0129 09:09:01.282724 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:09:01 crc kubenswrapper[5031]: E0129 
Jan 29 09:09:06 crc kubenswrapper[5031]: I0129 09:09:06.918851 5031 scope.go:117] "RemoveContainer" containerID="f9a892f4c750bf4858ea30a933b87615db491108148098369028287b7790229a"
Jan 29 09:09:06 crc kubenswrapper[5031]: I0129 09:09:06.943179 5031 scope.go:117] "RemoveContainer" containerID="e5ce0336f09c671175d918727f693dee3368638b73f1a7be8e21276c01b55de4"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.002059 5031 scope.go:117] "RemoveContainer" containerID="f081159778cdd195a946d271bf87e8ef2b36c2073dd9dcb40cc0729d08c84321"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.048526 5031 scope.go:117] "RemoveContainer" containerID="8892212ccb2a90e581f8442c663c443cfc8a28dcb5877c5e9b5696e6aae795aa"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.123214 5031 scope.go:117] "RemoveContainer" containerID="2115dab52bd1173809a834812d002b98a57b060bfa7b57239e9e2aaa5832cbff"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.159922 5031 scope.go:117] "RemoveContainer" containerID="8a5fad5f695365328f59e95f5299e07e8b7b5f7ae4cc9fae45767a6d7ddddf0a"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.192148 5031 scope.go:117] "RemoveContainer" containerID="8543e2110daee4bd7cd5c6c9a2366083953514f8a21b2ba08a92c7630d527ddc"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.229217 5031 scope.go:117] "RemoveContainer" containerID="c4735564d79a518e62d8f8ce6c55f8d95e52d4c497fd4c062d0434952438b4de"
Jan 29 09:09:07 crc kubenswrapper[5031]: I0129 09:09:07.253922 5031 scope.go:117] "RemoveContainer" containerID="fafa7e1e88abda2228f34616541cda446adb5c89fa3c0827f9e04718c8668293"
Jan 29 09:09:13 crc kubenswrapper[5031]: I0129 09:09:13.031910 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hrprs"]
Jan 29 09:09:13 crc kubenswrapper[5031]: I0129 09:09:13.041750 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hrprs"]
Jan 29 09:09:14 crc kubenswrapper[5031]: I0129 09:09:14.035725 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8q6zp"]
Jan 29 09:09:14 crc kubenswrapper[5031]: I0129 09:09:14.044070 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8q6zp"]
Jan 29 09:09:14 crc kubenswrapper[5031]: I0129 09:09:14.282725 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:09:14 crc kubenswrapper[5031]: E0129 09:09:14.283141 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:09:14 crc kubenswrapper[5031]: I0129 09:09:14.309703 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328b04fe-0ab5-45ab-8c94-239a7221575a" path="/var/lib/kubelet/pods/328b04fe-0ab5-45ab-8c94-239a7221575a/volumes"
Jan 29 09:09:14 crc kubenswrapper[5031]: I0129 09:09:14.310526 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d57cae-e6a0-4e2d-8509-a19fa68fcf25" path="/var/lib/kubelet/pods/64d57cae-e6a0-4e2d-8509-a19fa68fcf25/volumes"
Jan 29 09:09:15 crc kubenswrapper[5031]: I0129 09:09:15.029611 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tkg9p"]
Jan 29 09:09:15 crc kubenswrapper[5031]: I0129 09:09:15.038674 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tkg9p"]
Jan 29 09:09:16 crc kubenswrapper[5031]: I0129 09:09:16.295603 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea99f3b-a67a-4077-aba2-d6a5910779f3" path="/var/lib/kubelet/pods/8ea99f3b-a67a-4077-aba2-d6a5910779f3/volumes"
Jan 29 09:09:25 crc kubenswrapper[5031]: I0129 09:09:25.283924 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:09:25 crc kubenswrapper[5031]: E0129 09:09:25.284414 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:09:28 crc kubenswrapper[5031]: I0129 09:09:28.058979 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kmdhl"]
Jan 29 09:09:28 crc kubenswrapper[5031]: I0129 09:09:28.067575 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kmdhl"]
Jan 29 09:09:28 crc kubenswrapper[5031]: I0129 09:09:28.316074 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e96bab-0ee6-41af-9223-9f510ad5bbec" path="/var/lib/kubelet/pods/66e96bab-0ee6-41af-9223-9f510ad5bbec/volumes"
Jan 29 09:09:38 crc kubenswrapper[5031]: I0129 09:09:38.039552 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xg72z"]
Jan 29 09:09:38 crc kubenswrapper[5031]: I0129 09:09:38.049491 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xg72z"]
Jan 29 09:09:38 crc kubenswrapper[5031]: I0129 09:09:38.294758 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997a6082-d87d-4954-b383-9b27e161be4e" path="/var/lib/kubelet/pods/997a6082-d87d-4954-b383-9b27e161be4e/volumes"
Jan 29 09:09:40 crc kubenswrapper[5031]: I0129 09:09:40.292985 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe"
Jan 29 09:09:40 crc kubenswrapper[5031]: I0129 09:09:40.712142 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd"}
Jan 29 09:09:45 crc kubenswrapper[5031]: I0129 09:09:45.821692 5031 generic.go:334] "Generic (PLEG): container finished" podID="23b883de-aa7a-4b1c-90a7-238ccd739cee" containerID="3a09c906266b1f4456ca128f4724e92883764d60e9282266c6ea03368fd9fe65" exitCode=0
Jan 29 09:09:45 crc kubenswrapper[5031]: I0129 09:09:45.821747 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" event={"ID":"23b883de-aa7a-4b1c-90a7-238ccd739cee","Type":"ContainerDied","Data":"3a09c906266b1f4456ca128f4724e92883764d60e9282266c6ea03368fd9fe65"}
"SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" event={"ID":"23b883de-aa7a-4b1c-90a7-238ccd739cee","Type":"ContainerDied","Data":"3a09c906266b1f4456ca128f4724e92883764d60e9282266c6ea03368fd9fe65"} Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.270453 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.440003 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory\") pod \"23b883de-aa7a-4b1c-90a7-238ccd739cee\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.440129 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcqn8\" (UniqueName: \"kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8\") pod \"23b883de-aa7a-4b1c-90a7-238ccd739cee\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.440162 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam\") pod \"23b883de-aa7a-4b1c-90a7-238ccd739cee\" (UID: \"23b883de-aa7a-4b1c-90a7-238ccd739cee\") " Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.446430 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8" (OuterVolumeSpecName: "kube-api-access-pcqn8") pod "23b883de-aa7a-4b1c-90a7-238ccd739cee" (UID: "23b883de-aa7a-4b1c-90a7-238ccd739cee"). InnerVolumeSpecName "kube-api-access-pcqn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.467487 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23b883de-aa7a-4b1c-90a7-238ccd739cee" (UID: "23b883de-aa7a-4b1c-90a7-238ccd739cee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.468573 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory" (OuterVolumeSpecName: "inventory") pod "23b883de-aa7a-4b1c-90a7-238ccd739cee" (UID: "23b883de-aa7a-4b1c-90a7-238ccd739cee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.542055 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcqn8\" (UniqueName: \"kubernetes.io/projected/23b883de-aa7a-4b1c-90a7-238ccd739cee-kube-api-access-pcqn8\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.542294 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.542391 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b883de-aa7a-4b1c-90a7-238ccd739cee-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.842195 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" event={"ID":"23b883de-aa7a-4b1c-90a7-238ccd739cee","Type":"ContainerDied","Data":"1a9f7ed860816ca5fe8dcc5f70149ca50c80170f55a88675984dfbc0d6321d95"} Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.842244 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9f7ed860816ca5fe8dcc5f70149ca50c80170f55a88675984dfbc0d6321d95" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.842265 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.930039 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p7vjw"] Jan 29 09:09:47 crc kubenswrapper[5031]: E0129 09:09:47.930655 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b883de-aa7a-4b1c-90a7-238ccd739cee" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.930716 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b883de-aa7a-4b1c-90a7-238ccd739cee" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.930953 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b883de-aa7a-4b1c-90a7-238ccd739cee" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.931603 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.934108 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.934852 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.935027 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.935792 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.946843 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p7vjw"] Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.948660 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.948742 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:47 crc kubenswrapper[5031]: I0129 09:09:47.948857 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl4g\" (UniqueName: \"kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.050743 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.050826 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.050863 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl4g\" (UniqueName: \"kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc 
kubenswrapper[5031]: I0129 09:09:48.058439 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.062828 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.071072 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl4g\" (UniqueName: \"kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g\") pod \"ssh-known-hosts-edpm-deployment-p7vjw\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.251169 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.811849 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p7vjw"] Jan 29 09:09:48 crc kubenswrapper[5031]: W0129 09:09:48.820497 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod175dad89_fb7c_4769_8cc1_e475fbeac1f1.slice/crio-81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96 WatchSource:0}: Error finding container 81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96: Status 404 returned error can't find the container with id 81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96 Jan 29 09:09:48 crc kubenswrapper[5031]: I0129 09:09:48.852960 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" event={"ID":"175dad89-fb7c-4769-8cc1-e475fbeac1f1","Type":"ContainerStarted","Data":"81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96"} Jan 29 09:09:50 crc kubenswrapper[5031]: I0129 09:09:50.873795 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" event={"ID":"175dad89-fb7c-4769-8cc1-e475fbeac1f1","Type":"ContainerStarted","Data":"b4823c109eed2aeca406a2657b6c873d5824f4aa2e6afcbf6c5d0aaad89d577d"} Jan 29 09:09:50 crc kubenswrapper[5031]: I0129 09:09:50.897830 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" podStartSLOduration=3.122268996 podStartE2EDuration="3.897808642s" podCreationTimestamp="2026-01-29 09:09:47 +0000 UTC" firstStartedPulling="2026-01-29 09:09:48.825503165 +0000 UTC m=+1869.325091117" lastFinishedPulling="2026-01-29 09:09:49.601042811 +0000 UTC m=+1870.100630763" observedRunningTime="2026-01-29 09:09:50.890443372 +0000 UTC m=+1871.390031324" watchObservedRunningTime="2026-01-29 09:09:50.897808642 +0000 UTC m=+1871.397396604" Jan 29 09:09:56 crc kubenswrapper[5031]: I0129 09:09:56.925068 5031 generic.go:334] "Generic (PLEG): container finished" 
podID="175dad89-fb7c-4769-8cc1-e475fbeac1f1" containerID="b4823c109eed2aeca406a2657b6c873d5824f4aa2e6afcbf6c5d0aaad89d577d" exitCode=0 Jan 29 09:09:56 crc kubenswrapper[5031]: I0129 09:09:56.925172 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" event={"ID":"175dad89-fb7c-4769-8cc1-e475fbeac1f1","Type":"ContainerDied","Data":"b4823c109eed2aeca406a2657b6c873d5824f4aa2e6afcbf6c5d0aaad89d577d"} Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.343307 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.448742 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0\") pod \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.448846 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam\") pod \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.448900 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zl4g\" (UniqueName: \"kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g\") pod \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\" (UID: \"175dad89-fb7c-4769-8cc1-e475fbeac1f1\") " Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.455578 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g" (OuterVolumeSpecName: "kube-api-access-2zl4g") pod "175dad89-fb7c-4769-8cc1-e475fbeac1f1" (UID: "175dad89-fb7c-4769-8cc1-e475fbeac1f1"). InnerVolumeSpecName "kube-api-access-2zl4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.476527 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "175dad89-fb7c-4769-8cc1-e475fbeac1f1" (UID: "175dad89-fb7c-4769-8cc1-e475fbeac1f1"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.496632 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "175dad89-fb7c-4769-8cc1-e475fbeac1f1" (UID: "175dad89-fb7c-4769-8cc1-e475fbeac1f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.551495 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zl4g\" (UniqueName: \"kubernetes.io/projected/175dad89-fb7c-4769-8cc1-e475fbeac1f1-kube-api-access-2zl4g\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.551851 5031 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.551914 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/175dad89-fb7c-4769-8cc1-e475fbeac1f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.947591 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" event={"ID":"175dad89-fb7c-4769-8cc1-e475fbeac1f1","Type":"ContainerDied","Data":"81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96"} Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.947636 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a414c578fbdb69dedf581014999a341955783897a6db72ac1f50fbcc0b2f96" Jan 29 09:09:58 crc kubenswrapper[5031]: I0129 09:09:58.947695 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p7vjw" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.108633 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m"] Jan 29 09:09:59 crc kubenswrapper[5031]: E0129 09:09:59.109349 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175dad89-fb7c-4769-8cc1-e475fbeac1f1" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.109501 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="175dad89-fb7c-4769-8cc1-e475fbeac1f1" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.109837 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="175dad89-fb7c-4769-8cc1-e475fbeac1f1" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.110778 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.112634 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.114325 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.114611 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.114857 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.118778 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m"] Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.263400 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.263548 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.263781 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bch54\" (UniqueName: \"kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.365849 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bch54\" (UniqueName: \"kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.365909 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.365933 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.371085 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.381048 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.391095 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bch54\" (UniqueName: \"kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mlr4m\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:09:59 crc kubenswrapper[5031]: I0129 09:09:59.426585 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:10:00 crc kubenswrapper[5031]: I0129 09:09:59.999686 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m"] Jan 29 09:10:00 crc kubenswrapper[5031]: I0129 09:10:00.965524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" event={"ID":"b984cb4e-326f-4b77-8847-e6284ed0f466","Type":"ContainerStarted","Data":"ab2cba68d0af59d32792c704071ea7620cafc0bdb73666d2455514880f1bff01"} Jan 29 09:10:00 crc kubenswrapper[5031]: I0129 09:10:00.966135 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" event={"ID":"b984cb4e-326f-4b77-8847-e6284ed0f466","Type":"ContainerStarted","Data":"abe3a286bdbbd0884f6ce3aeb6dc5422f8c90a3de486508f4736f2aba9eefb1c"} Jan 29 09:10:00 crc kubenswrapper[5031]: I0129 09:10:00.983959 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" podStartSLOduration=1.510712556 podStartE2EDuration="1.983940248s" podCreationTimestamp="2026-01-29 09:09:59 +0000 UTC" firstStartedPulling="2026-01-29 09:10:00.014232976 +0000 UTC m=+1880.513820928" lastFinishedPulling="2026-01-29 09:10:00.487460668 +0000 UTC m=+1880.987048620" observedRunningTime="2026-01-29 09:10:00.980834333 +0000 UTC m=+1881.480422285" watchObservedRunningTime="2026-01-29 09:10:00.983940248 +0000 UTC m=+1881.483528200" Jan 29 09:10:07 crc kubenswrapper[5031]: I0129 09:10:07.427479 5031 scope.go:117] "RemoveContainer" containerID="30e9f3e3171c34b71e8c911f29972049f0e8bddfa4d21a0bc56e048277caa0a7" Jan 29 09:10:07 crc kubenswrapper[5031]: I0129 09:10:07.480956 5031 scope.go:117] "RemoveContainer" containerID="3b44d7401a20ee2f9ed558fc808863f113f097d62a1e4060d09a1879a34e9272" Jan 29 09:10:07 crc 
kubenswrapper[5031]: I0129 09:10:07.519579 5031 scope.go:117] "RemoveContainer" containerID="56732bde36bf049c1a0eab3361754098e731940fcc5a4fb8dcdf6eb536847818" Jan 29 09:10:07 crc kubenswrapper[5031]: I0129 09:10:07.571735 5031 scope.go:117] "RemoveContainer" containerID="1a3c80a7fac4c3bb26e4b14f65137c3093a803a958279eab49d5317379606b7d" Jan 29 09:10:07 crc kubenswrapper[5031]: I0129 09:10:07.609829 5031 scope.go:117] "RemoveContainer" containerID="73284c02465262e1058676773bdcb3d0c26034d3fb1e649a2ac74546b11c46ed" Jan 29 09:10:09 crc kubenswrapper[5031]: I0129 09:10:09.046279 5031 generic.go:334] "Generic (PLEG): container finished" podID="b984cb4e-326f-4b77-8847-e6284ed0f466" containerID="ab2cba68d0af59d32792c704071ea7620cafc0bdb73666d2455514880f1bff01" exitCode=0 Jan 29 09:10:09 crc kubenswrapper[5031]: I0129 09:10:09.046326 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" event={"ID":"b984cb4e-326f-4b77-8847-e6284ed0f466","Type":"ContainerDied","Data":"ab2cba68d0af59d32792c704071ea7620cafc0bdb73666d2455514880f1bff01"} Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.442654 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.586875 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bch54\" (UniqueName: \"kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54\") pod \"b984cb4e-326f-4b77-8847-e6284ed0f466\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.586995 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam\") pod \"b984cb4e-326f-4b77-8847-e6284ed0f466\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.587170 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory\") pod \"b984cb4e-326f-4b77-8847-e6284ed0f466\" (UID: \"b984cb4e-326f-4b77-8847-e6284ed0f466\") " Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.592470 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54" (OuterVolumeSpecName: "kube-api-access-bch54") pod "b984cb4e-326f-4b77-8847-e6284ed0f466" (UID: "b984cb4e-326f-4b77-8847-e6284ed0f466"). InnerVolumeSpecName "kube-api-access-bch54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.614381 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory" (OuterVolumeSpecName: "inventory") pod "b984cb4e-326f-4b77-8847-e6284ed0f466" (UID: "b984cb4e-326f-4b77-8847-e6284ed0f466"). InnerVolumeSpecName "inventory". 
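
Stepping back, the dataplane jobs in this section run strictly in series: ssh-known-hosts-edpm-deployment-p7vjw exits 0 and is fully torn down before run-os-edpm-deployment-openstack-edpm-ipam-mlr4m is ADDed, which likewise completes before reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh appears. The cpu_manager/memory_manager "RemoveStaleState" records at each handoff drop the per-container state of the job that just finished. A sketch for recovering that chain from journal text on stdin — the two regexes assume only the literal record formats visible in this section:

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

var (
    addRe  = regexp.MustCompile(`"SyncLoop ADD" source="api" pods=\["([^"]+)"\]`)
    doneRe = regexp.MustCompile(`podID="([^"]+)" containerID="[0-9a-f]+" exitCode=([0-9]+)`)
)

func main() {
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal records can be long
    for sc.Scan() {
        line := sc.Text()
        if m := addRe.FindStringSubmatch(line); m != nil {
            fmt.Println("ADD ", m[1])
        }
        if m := doneRe.FindStringSubmatch(line); m != nil {
            fmt.Println("DONE", m[1], "exitCode="+m[2])
        }
    }
}

Piping the journal through it prints an ADD/DONE timeline per pod, which makes gaps or overlaps in the chain easy to spot.
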
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.616237 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b984cb4e-326f-4b77-8847-e6284ed0f466" (UID: "b984cb4e-326f-4b77-8847-e6284ed0f466"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.689998 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bch54\" (UniqueName: \"kubernetes.io/projected/b984cb4e-326f-4b77-8847-e6284ed0f466-kube-api-access-bch54\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.690046 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:10 crc kubenswrapper[5031]: I0129 09:10:10.690058 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b984cb4e-326f-4b77-8847-e6284ed0f466-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.065299 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" event={"ID":"b984cb4e-326f-4b77-8847-e6284ed0f466","Type":"ContainerDied","Data":"abe3a286bdbbd0884f6ce3aeb6dc5422f8c90a3de486508f4736f2aba9eefb1c"} Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.065341 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe3a286bdbbd0884f6ce3aeb6dc5422f8c90a3de486508f4736f2aba9eefb1c" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.065407 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.201674 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh"] Jan 29 09:10:11 crc kubenswrapper[5031]: E0129 09:10:11.202085 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b984cb4e-326f-4b77-8847-e6284ed0f466" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.202105 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b984cb4e-326f-4b77-8847-e6284ed0f466" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.202337 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b984cb4e-326f-4b77-8847-e6284ed0f466" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.203048 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.205477 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.206194 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.206263 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.208063 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.219596 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh"] Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.301356 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx49x\" (UniqueName: \"kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.301514 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.301687 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.403462 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx49x\" (UniqueName: \"kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.403576 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.403777 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.409056 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.415002 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.426125 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx49x\" (UniqueName: \"kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:11 crc kubenswrapper[5031]: I0129 09:10:11.522193 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:12 crc kubenswrapper[5031]: I0129 09:10:12.056804 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh"] Jan 29 09:10:12 crc kubenswrapper[5031]: I0129 09:10:12.075776 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" event={"ID":"083e2aae-39f0-429d-af43-0ec893e0c941","Type":"ContainerStarted","Data":"df28c186474ac228cbce60e31e03e34ef6bb25386f9883dde08c76a05124da88"} Jan 29 09:10:13 crc kubenswrapper[5031]: I0129 09:10:13.086647 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" event={"ID":"083e2aae-39f0-429d-af43-0ec893e0c941","Type":"ContainerStarted","Data":"3b1e0bae10debce8219a80076459d2368e1de546326793626b2eac3d6f24916d"} Jan 29 09:10:13 crc kubenswrapper[5031]: I0129 09:10:13.107705 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" podStartSLOduration=1.625236882 podStartE2EDuration="2.107687813s" podCreationTimestamp="2026-01-29 09:10:11 +0000 UTC" firstStartedPulling="2026-01-29 09:10:12.060101881 +0000 UTC m=+1892.559689833" lastFinishedPulling="2026-01-29 09:10:12.542552812 +0000 UTC m=+1893.042140764" observedRunningTime="2026-01-29 09:10:13.106722857 +0000 UTC m=+1893.606310809" watchObservedRunningTime="2026-01-29 09:10:13.107687813 +0000 UTC m=+1893.607275755" Jan 29 09:10:22 crc kubenswrapper[5031]: I0129 09:10:22.170766 5031 generic.go:334] "Generic (PLEG): container finished" podID="083e2aae-39f0-429d-af43-0ec893e0c941" containerID="3b1e0bae10debce8219a80076459d2368e1de546326793626b2eac3d6f24916d" exitCode=0 Jan 29 09:10:22 crc kubenswrapper[5031]: I0129 09:10:22.170838 5031 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" event={"ID":"083e2aae-39f0-429d-af43-0ec893e0c941","Type":"ContainerDied","Data":"3b1e0bae10debce8219a80076459d2368e1de546326793626b2eac3d6f24916d"} Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.576990 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.758665 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx49x\" (UniqueName: \"kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x\") pod \"083e2aae-39f0-429d-af43-0ec893e0c941\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.758776 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam\") pod \"083e2aae-39f0-429d-af43-0ec893e0c941\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.758983 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory\") pod \"083e2aae-39f0-429d-af43-0ec893e0c941\" (UID: \"083e2aae-39f0-429d-af43-0ec893e0c941\") " Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.764347 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x" (OuterVolumeSpecName: "kube-api-access-nx49x") pod "083e2aae-39f0-429d-af43-0ec893e0c941" (UID: "083e2aae-39f0-429d-af43-0ec893e0c941"). InnerVolumeSpecName "kube-api-access-nx49x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.785858 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory" (OuterVolumeSpecName: "inventory") pod "083e2aae-39f0-429d-af43-0ec893e0c941" (UID: "083e2aae-39f0-429d-af43-0ec893e0c941"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.786245 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "083e2aae-39f0-429d-af43-0ec893e0c941" (UID: "083e2aae-39f0-429d-af43-0ec893e0c941"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.861213 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.861275 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx49x\" (UniqueName: \"kubernetes.io/projected/083e2aae-39f0-429d-af43-0ec893e0c941-kube-api-access-nx49x\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:23 crc kubenswrapper[5031]: I0129 09:10:23.861293 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/083e2aae-39f0-429d-af43-0ec893e0c941-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:24 crc kubenswrapper[5031]: I0129 09:10:24.189934 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" event={"ID":"083e2aae-39f0-429d-af43-0ec893e0c941","Type":"ContainerDied","Data":"df28c186474ac228cbce60e31e03e34ef6bb25386f9883dde08c76a05124da88"} Jan 29 09:10:24 crc kubenswrapper[5031]: I0129 09:10:24.189986 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df28c186474ac228cbce60e31e03e34ef6bb25386f9883dde08c76a05124da88" Jan 29 09:10:24 crc kubenswrapper[5031]: I0129 09:10:24.190047 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.063431 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1468-account-create-update-zxksp"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.074469 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-j7vsx"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.082439 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d412-account-create-update-jsj2g"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.144115 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mhwm7"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.156151 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dh6kp"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.163890 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2310-account-create-update-qtfvc"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.171622 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d412-account-create-update-jsj2g"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.179443 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mhwm7"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.191432 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1468-account-create-update-zxksp"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.199447 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-j7vsx"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.208495 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dh6kp"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 
09:10:26.214599 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2310-account-create-update-qtfvc"] Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.301451 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1244cfbf-875f-4291-be5d-bf559c363dd0" path="/var/lib/kubelet/pods/1244cfbf-875f-4291-be5d-bf559c363dd0/volumes" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.302184 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27bdb3af-5e68-4db3-a04a-b8dda8d56d3b" path="/var/lib/kubelet/pods/27bdb3af-5e68-4db3-a04a-b8dda8d56d3b/volumes" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.304339 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c8338c-c195-4bac-802a-bfa0ba3a7a35" path="/var/lib/kubelet/pods/54c8338c-c195-4bac-802a-bfa0ba3a7a35/volumes" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.305025 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a90621-3be5-48e7-a13e-296d459c61c2" path="/var/lib/kubelet/pods/59a90621-3be5-48e7-a13e-296d459c61c2/volumes" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.305691 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbad268d-467d-4c4e-bdd4-0877a1311246" path="/var/lib/kubelet/pods/bbad268d-467d-4c4e-bdd4-0877a1311246/volumes" Jan 29 09:10:26 crc kubenswrapper[5031]: I0129 09:10:26.311339 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b8d99f-c56f-4874-8540-82a133c05e28" path="/var/lib/kubelet/pods/e6b8d99f-c56f-4874-8540-82a133c05e28/volumes" Jan 29 09:11:02 crc kubenswrapper[5031]: I0129 09:11:02.055884 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jr7x7"] Jan 29 09:11:02 crc kubenswrapper[5031]: I0129 09:11:02.068742 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jr7x7"] Jan 29 09:11:02 crc kubenswrapper[5031]: I0129 09:11:02.293630 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c240bea9-22e4-4a3c-8237-0d09838c72d9" path="/var/lib/kubelet/pods/c240bea9-22e4-4a3c-8237-0d09838c72d9/volumes" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.754875 5031 scope.go:117] "RemoveContainer" containerID="1f4ccfacd2124f8247600b85bbe7bf621c13fef4ee7dded316dddef5657d3517" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.777974 5031 scope.go:117] "RemoveContainer" containerID="9863239b29df4efaf6ecf6f0938b7b5029802d64fbb18497679d9d934b437717" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.820630 5031 scope.go:117] "RemoveContainer" containerID="6872c7a00cc9539aadb44a405693ce20d1719922f39eab6e08fb555030a9ae62" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.863745 5031 scope.go:117] "RemoveContainer" containerID="a688e531f54eb1d214d8c7658d8334103fe99fb6e0ca70bf24c17119ed692a7a" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.912362 5031 scope.go:117] "RemoveContainer" containerID="63b1b5ce9df979fd01f1642dcea92c09400990faf3be9c9ab3ffc3cd1e1f7285" Jan 29 09:11:07 crc kubenswrapper[5031]: I0129 09:11:07.947283 5031 scope.go:117] "RemoveContainer" containerID="a251549a584fc8b9cff455b6494c6e42b9aa45b3e0f041d3471f2293a6ad4592" Jan 29 09:11:08 crc kubenswrapper[5031]: I0129 09:11:08.019276 5031 scope.go:117] "RemoveContainer" containerID="8c6e53ac4e0efffeadd04f4d745cff34d60ff0a8b4c9908d4a0cd82132a3d6fe" Jan 29 09:11:26 crc kubenswrapper[5031]: I0129 09:11:26.050191 5031 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wkpdv"] Jan 29 09:11:26 crc kubenswrapper[5031]: I0129 09:11:26.062295 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wkpdv"] Jan 29 09:11:26 crc kubenswrapper[5031]: I0129 09:11:26.293250 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1291d1-c499-4e5b-8aa3-3547c546502c" path="/var/lib/kubelet/pods/be1291d1-c499-4e5b-8aa3-3547c546502c/volumes" Jan 29 09:11:27 crc kubenswrapper[5031]: I0129 09:11:27.029398 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z4wbf"] Jan 29 09:11:27 crc kubenswrapper[5031]: I0129 09:11:27.038547 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z4wbf"] Jan 29 09:11:28 crc kubenswrapper[5031]: I0129 09:11:28.303104 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300251ab-347d-4865-9f56-417ae1fc962e" path="/var/lib/kubelet/pods/300251ab-347d-4865-9f56-417ae1fc962e/volumes" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.151939 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:08 crc kubenswrapper[5031]: E0129 09:12:08.152771 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083e2aae-39f0-429d-af43-0ec893e0c941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.152783 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="083e2aae-39f0-429d-af43-0ec893e0c941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.152845 5031 scope.go:117] "RemoveContainer" containerID="0cb106eb3119c6fb35f1cd1ec00a1ef07eb2ec7bc394ec2d35e83d475144b7e4" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.152968 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="083e2aae-39f0-429d-af43-0ec893e0c941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.158065 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.180788 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.233188 5031 scope.go:117] "RemoveContainer" containerID="af21c6b15968681356856bc614dd09edbe62a701468d2fff395ea3f613b05a2e" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.305143 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.305187 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.305258 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqkt\" (UniqueName: \"kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.407026 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.407086 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.407914 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.407947 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.408058 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqkt\" (UniqueName: \"kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " 
pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.428576 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqkt\" (UniqueName: \"kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt\") pod \"redhat-operators-f9p2l\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.493782 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.493837 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:12:08 crc kubenswrapper[5031]: I0129 09:12:08.540116 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:09 crc kubenswrapper[5031]: I0129 09:12:09.049758 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:09 crc kubenswrapper[5031]: I0129 09:12:09.575123 5031 generic.go:334] "Generic (PLEG): container finished" podID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerID="d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949" exitCode=0 Jan 29 09:12:09 crc kubenswrapper[5031]: I0129 09:12:09.575194 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerDied","Data":"d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949"} Jan 29 09:12:09 crc kubenswrapper[5031]: I0129 09:12:09.575495 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerStarted","Data":"52a9e3a4510e0933b6ec646ec93a1a9309c9f35214b2b4cb30aebc39c76762b6"} Jan 29 09:12:10 crc kubenswrapper[5031]: I0129 09:12:10.584272 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerStarted","Data":"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb"} Jan 29 09:12:11 crc kubenswrapper[5031]: I0129 09:12:11.592440 5031 generic.go:334] "Generic (PLEG): container finished" podID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerID="f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb" exitCode=0 Jan 29 09:12:11 crc kubenswrapper[5031]: I0129 09:12:11.592486 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerDied","Data":"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb"} Jan 29 09:12:12 crc kubenswrapper[5031]: I0129 09:12:12.602795 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" 
event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerStarted","Data":"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903"} Jan 29 09:12:12 crc kubenswrapper[5031]: I0129 09:12:12.623808 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9p2l" podStartSLOduration=1.8716447299999999 podStartE2EDuration="4.623788228s" podCreationTimestamp="2026-01-29 09:12:08 +0000 UTC" firstStartedPulling="2026-01-29 09:12:09.577185354 +0000 UTC m=+2010.076773306" lastFinishedPulling="2026-01-29 09:12:12.329328852 +0000 UTC m=+2012.828916804" observedRunningTime="2026-01-29 09:12:12.620342926 +0000 UTC m=+2013.119930878" watchObservedRunningTime="2026-01-29 09:12:12.623788228 +0000 UTC m=+2013.123376180" Jan 29 09:12:13 crc kubenswrapper[5031]: I0129 09:12:13.049958 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tvcms"] Jan 29 09:12:13 crc kubenswrapper[5031]: I0129 09:12:13.058280 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tvcms"] Jan 29 09:12:14 crc kubenswrapper[5031]: I0129 09:12:14.296783 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2e0d86-555d-42e1-beca-00cd83b2c90a" path="/var/lib/kubelet/pods/7b2e0d86-555d-42e1-beca-00cd83b2c90a/volumes" Jan 29 09:12:18 crc kubenswrapper[5031]: I0129 09:12:18.540775 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:18 crc kubenswrapper[5031]: I0129 09:12:18.541408 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:18 crc kubenswrapper[5031]: I0129 09:12:18.586530 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:18 crc kubenswrapper[5031]: I0129 09:12:18.696411 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:18 crc kubenswrapper[5031]: I0129 09:12:18.867527 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:20 crc kubenswrapper[5031]: I0129 09:12:20.665225 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9p2l" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="registry-server" containerID="cri-o://be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903" gracePeriod=2 Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.118438 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.260543 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities\") pod \"d0291b1d-de69-46fb-a31d-ff6fa3678091\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.260681 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content\") pod \"d0291b1d-de69-46fb-a31d-ff6fa3678091\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.260866 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqkt\" (UniqueName: \"kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt\") pod \"d0291b1d-de69-46fb-a31d-ff6fa3678091\" (UID: \"d0291b1d-de69-46fb-a31d-ff6fa3678091\") " Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.261613 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities" (OuterVolumeSpecName: "utilities") pod "d0291b1d-de69-46fb-a31d-ff6fa3678091" (UID: "d0291b1d-de69-46fb-a31d-ff6fa3678091"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.268708 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt" (OuterVolumeSpecName: "kube-api-access-ksqkt") pod "d0291b1d-de69-46fb-a31d-ff6fa3678091" (UID: "d0291b1d-de69-46fb-a31d-ff6fa3678091"). InnerVolumeSpecName "kube-api-access-ksqkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.368012 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.368056 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksqkt\" (UniqueName: \"kubernetes.io/projected/d0291b1d-de69-46fb-a31d-ff6fa3678091-kube-api-access-ksqkt\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.398620 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0291b1d-de69-46fb-a31d-ff6fa3678091" (UID: "d0291b1d-de69-46fb-a31d-ff6fa3678091"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.470813 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0291b1d-de69-46fb-a31d-ff6fa3678091-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.680190 5031 generic.go:334] "Generic (PLEG): container finished" podID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerID="be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903" exitCode=0 Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.680242 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerDied","Data":"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903"} Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.680280 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9p2l" event={"ID":"d0291b1d-de69-46fb-a31d-ff6fa3678091","Type":"ContainerDied","Data":"52a9e3a4510e0933b6ec646ec93a1a9309c9f35214b2b4cb30aebc39c76762b6"} Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.680301 5031 scope.go:117] "RemoveContainer" containerID="be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.680348 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9p2l" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.711962 5031 scope.go:117] "RemoveContainer" containerID="f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.719556 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.731041 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9p2l"] Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.736892 5031 scope.go:117] "RemoveContainer" containerID="d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.779604 5031 scope.go:117] "RemoveContainer" containerID="be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903" Jan 29 09:12:21 crc kubenswrapper[5031]: E0129 09:12:21.780273 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903\": container with ID starting with be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903 not found: ID does not exist" containerID="be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.780331 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903"} err="failed to get container status \"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903\": rpc error: code = NotFound desc = could not find container \"be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903\": container with ID starting with be1ac016129a03fa7f07f0a589bf9648686ce9e241768e4fed08cf1c25543903 not found: ID does not exist" Jan 29 09:12:21 crc 
kubenswrapper[5031]: I0129 09:12:21.780391 5031 scope.go:117] "RemoveContainer" containerID="f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb" Jan 29 09:12:21 crc kubenswrapper[5031]: E0129 09:12:21.781197 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb\": container with ID starting with f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb not found: ID does not exist" containerID="f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.781245 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb"} err="failed to get container status \"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb\": rpc error: code = NotFound desc = could not find container \"f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb\": container with ID starting with f11c5cd0fde6a9ed47fd3639e26ae8394951b3900f4335deeedaf5b292be9aeb not found: ID does not exist" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.781292 5031 scope.go:117] "RemoveContainer" containerID="d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949" Jan 29 09:12:21 crc kubenswrapper[5031]: E0129 09:12:21.781736 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949\": container with ID starting with d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949 not found: ID does not exist" containerID="d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949" Jan 29 09:12:21 crc kubenswrapper[5031]: I0129 09:12:21.781772 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949"} err="failed to get container status \"d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949\": rpc error: code = NotFound desc = could not find container \"d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949\": container with ID starting with d7aa416a95827b49c4f0b8e3d7ac62b9045cd27b593577573c8abad43541f949 not found: ID does not exist" Jan 29 09:12:22 crc kubenswrapper[5031]: I0129 09:12:22.298119 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" path="/var/lib/kubelet/pods/d0291b1d-de69-46fb-a31d-ff6fa3678091/volumes" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.236393 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:29 crc kubenswrapper[5031]: E0129 09:12:29.237339 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="registry-server" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.237355 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="registry-server" Jan 29 09:12:29 crc kubenswrapper[5031]: E0129 09:12:29.237390 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="extract-utilities" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.237399 5031 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="extract-utilities" Jan 29 09:12:29 crc kubenswrapper[5031]: E0129 09:12:29.237413 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="extract-content" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.237420 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="extract-content" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.237594 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0291b1d-de69-46fb-a31d-ff6fa3678091" containerName="registry-server" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.239947 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.247762 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.263880 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.263958 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9r9\" (UniqueName: \"kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.263988 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.366648 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.366810 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9r9\" (UniqueName: \"kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.366931 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc 
kubenswrapper[5031]: I0129 09:12:29.367359 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.367407 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.389749 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9r9\" (UniqueName: \"kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9\") pod \"redhat-marketplace-j4zd9\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:29 crc kubenswrapper[5031]: I0129 09:12:29.569388 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:30 crc kubenswrapper[5031]: I0129 09:12:30.072489 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:30 crc kubenswrapper[5031]: I0129 09:12:30.797699 5031 generic.go:334] "Generic (PLEG): container finished" podID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerID="4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679" exitCode=0 Jan 29 09:12:30 crc kubenswrapper[5031]: I0129 09:12:30.797777 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerDied","Data":"4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679"} Jan 29 09:12:30 crc kubenswrapper[5031]: I0129 09:12:30.798083 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerStarted","Data":"2247be236209433a4655942459acd6394acf2157c80428863891f66ffc26f381"} Jan 29 09:12:31 crc kubenswrapper[5031]: I0129 09:12:31.809311 5031 generic.go:334] "Generic (PLEG): container finished" podID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerID="ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466" exitCode=0 Jan 29 09:12:31 crc kubenswrapper[5031]: I0129 09:12:31.809409 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerDied","Data":"ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466"} Jan 29 09:12:32 crc kubenswrapper[5031]: I0129 09:12:32.822125 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerStarted","Data":"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95"} Jan 29 09:12:32 crc kubenswrapper[5031]: I0129 09:12:32.847225 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4zd9" podStartSLOduration=2.400217629 
podStartE2EDuration="3.847205232s" podCreationTimestamp="2026-01-29 09:12:29 +0000 UTC" firstStartedPulling="2026-01-29 09:12:30.801258946 +0000 UTC m=+2031.300846898" lastFinishedPulling="2026-01-29 09:12:32.248246539 +0000 UTC m=+2032.747834501" observedRunningTime="2026-01-29 09:12:32.84527726 +0000 UTC m=+2033.344865222" watchObservedRunningTime="2026-01-29 09:12:32.847205232 +0000 UTC m=+2033.346793184" Jan 29 09:12:38 crc kubenswrapper[5031]: I0129 09:12:38.494052 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:12:38 crc kubenswrapper[5031]: I0129 09:12:38.494741 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:12:39 crc kubenswrapper[5031]: I0129 09:12:39.569684 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:39 crc kubenswrapper[5031]: I0129 09:12:39.569752 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:39 crc kubenswrapper[5031]: I0129 09:12:39.615289 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:39 crc kubenswrapper[5031]: I0129 09:12:39.934333 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:39 crc kubenswrapper[5031]: I0129 09:12:39.994601 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:41 crc kubenswrapper[5031]: I0129 09:12:41.903435 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4zd9" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="registry-server" containerID="cri-o://5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95" gracePeriod=2 Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.345606 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.447990 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content\") pod \"00094a8b-f6e5-43aa-ada0-c9c46726e302\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.448040 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9r9\" (UniqueName: \"kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9\") pod \"00094a8b-f6e5-43aa-ada0-c9c46726e302\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.448163 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities\") pod \"00094a8b-f6e5-43aa-ada0-c9c46726e302\" (UID: \"00094a8b-f6e5-43aa-ada0-c9c46726e302\") " Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.449294 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities" (OuterVolumeSpecName: "utilities") pod "00094a8b-f6e5-43aa-ada0-c9c46726e302" (UID: "00094a8b-f6e5-43aa-ada0-c9c46726e302"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.458516 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9" (OuterVolumeSpecName: "kube-api-access-zt9r9") pod "00094a8b-f6e5-43aa-ada0-c9c46726e302" (UID: "00094a8b-f6e5-43aa-ada0-c9c46726e302"). InnerVolumeSpecName "kube-api-access-zt9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.472253 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00094a8b-f6e5-43aa-ada0-c9c46726e302" (UID: "00094a8b-f6e5-43aa-ada0-c9c46726e302"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.549969 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.550237 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9r9\" (UniqueName: \"kubernetes.io/projected/00094a8b-f6e5-43aa-ada0-c9c46726e302-kube-api-access-zt9r9\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:42 crc kubenswrapper[5031]: I0129 09:12:42.550251 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00094a8b-f6e5-43aa-ada0-c9c46726e302-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.908019 5031 generic.go:334] "Generic (PLEG): container finished" podID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerID="5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95" exitCode=0 Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.908073 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerDied","Data":"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95"} Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.908107 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4zd9" event={"ID":"00094a8b-f6e5-43aa-ada0-c9c46726e302","Type":"ContainerDied","Data":"2247be236209433a4655942459acd6394acf2157c80428863891f66ffc26f381"} Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.908127 5031 scope.go:117] "RemoveContainer" containerID="5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95" Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.908322 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4zd9" Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.953543 5031 scope.go:117] "RemoveContainer" containerID="ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466" Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.961024 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.973217 5031 scope.go:117] "RemoveContainer" containerID="4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679" Jan 29 09:12:43 crc kubenswrapper[5031]: I0129 09:12:43.981200 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4zd9"] Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.014016 5031 scope.go:117] "RemoveContainer" containerID="5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95" Jan 29 09:12:44 crc kubenswrapper[5031]: E0129 09:12:44.016174 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95\": container with ID starting with 5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95 not found: ID does not exist" containerID="5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.016214 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95"} err="failed to get container status \"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95\": rpc error: code = NotFound desc = could not find container \"5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95\": container with ID starting with 5b31811b17e1d715e5020bc15f2ae5a7c3190c27de7522fff2dbaf5eda425e95 not found: ID does not exist" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.016236 5031 scope.go:117] "RemoveContainer" containerID="ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466" Jan 29 09:12:44 crc kubenswrapper[5031]: E0129 09:12:44.016556 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466\": container with ID starting with ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466 not found: ID does not exist" containerID="ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.016579 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466"} err="failed to get container status \"ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466\": rpc error: code = NotFound desc = could not find container \"ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466\": container with ID starting with ed9140255c52742372f33ca14f0f20c1f671fdc4b45d94ca81864b7bb8081466 not found: ID does not exist" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.016592 5031 scope.go:117] "RemoveContainer" containerID="4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679" Jan 29 09:12:44 crc kubenswrapper[5031]: E0129 09:12:44.016872 5031 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679\": container with ID starting with 4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679 not found: ID does not exist" containerID="4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.016920 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679"} err="failed to get container status \"4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679\": rpc error: code = NotFound desc = could not find container \"4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679\": container with ID starting with 4793bec32a7ffabaf3004d712df4b3bd6de73e46663300faeb8e1399b461d679 not found: ID does not exist" Jan 29 09:12:44 crc kubenswrapper[5031]: I0129 09:12:44.297290 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" path="/var/lib/kubelet/pods/00094a8b-f6e5-43aa-ada0-c9c46726e302/volumes" Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.343794 5031 scope.go:117] "RemoveContainer" containerID="a35c5ca26395119ecd0d07528d5193eabbfe462f97210e6471ebf1fdacc31273" Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.493533 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.493596 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.493647 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.494534 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:13:08 crc kubenswrapper[5031]: I0129 09:13:08.494592 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd" gracePeriod=600 Jan 29 09:13:09 crc kubenswrapper[5031]: I0129 09:13:09.148065 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd" exitCode=0 Jan 29 09:13:09 crc kubenswrapper[5031]: I0129 09:13:09.148389 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd"} Jan 29 09:13:09 crc kubenswrapper[5031]: I0129 09:13:09.148418 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d"} Jan 29 09:13:09 crc kubenswrapper[5031]: I0129 09:13:09.148443 5031 scope.go:117] "RemoveContainer" containerID="bb2504be6a00f5facd275ce9b0ac54af5f7d45a657633e686133fdc3f7d982fe" Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.293886 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p7vjw"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.297627 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.308336 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.327228 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.341099 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.357911 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p7vjw"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.367042 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mlr4m"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.375152 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-djn6z"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.382361 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.389624 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.399318 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xv5w4"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.409432 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.417806 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.425158 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8k8pg"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.434611 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ccpx8"] Jan 
29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.444139 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l89fp"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.453080 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.462261 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9dccd"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.470839 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k5r57"] Jan 29 09:14:28 crc kubenswrapper[5031]: I0129 09:14:28.486068 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxdzh"] Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.294817 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04bc9814-a834-48e6-9096-c233ccd1d5e0" path="/var/lib/kubelet/pods/04bc9814-a834-48e6-9096-c233ccd1d5e0/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.295882 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083e2aae-39f0-429d-af43-0ec893e0c941" path="/var/lib/kubelet/pods/083e2aae-39f0-429d-af43-0ec893e0c941/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.297058 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2e9a25-c16f-4e14-9803-25cb31fa3d20" path="/var/lib/kubelet/pods/0f2e9a25-c16f-4e14-9803-25cb31fa3d20/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.298159 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175bc64f-fc57-46b4-bdff-f0fdfaa062ae" path="/var/lib/kubelet/pods/175bc64f-fc57-46b4-bdff-f0fdfaa062ae/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.299619 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175dad89-fb7c-4769-8cc1-e475fbeac1f1" path="/var/lib/kubelet/pods/175dad89-fb7c-4769-8cc1-e475fbeac1f1/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.300229 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b883de-aa7a-4b1c-90a7-238ccd739cee" path="/var/lib/kubelet/pods/23b883de-aa7a-4b1c-90a7-238ccd739cee/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.301642 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3c382e-3da7-4a2f-8227-e2986b1c28df" path="/var/lib/kubelet/pods/7e3c382e-3da7-4a2f-8227-e2986b1c28df/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.302275 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b984cb4e-326f-4b77-8847-e6284ed0f466" path="/var/lib/kubelet/pods/b984cb4e-326f-4b77-8847-e6284ed0f466/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.303486 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf53718-01d7-424d-a46a-949b1aff7342" path="/var/lib/kubelet/pods/ddf53718-01d7-424d-a46a-949b1aff7342/volumes" Jan 29 09:14:30 crc kubenswrapper[5031]: I0129 09:14:30.304058 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5bd9e7d-e031-479b-a5cc-62bdce4ecce0" path="/var/lib/kubelet/pods/e5bd9e7d-e031-479b-a5cc-62bdce4ecce0/volumes" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.309937 5031 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb"] Jan 29 09:14:34 crc kubenswrapper[5031]: E0129 09:14:34.310883 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="registry-server" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.310900 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="registry-server" Jan 29 09:14:34 crc kubenswrapper[5031]: E0129 09:14:34.310925 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="extract-utilities" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.310931 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="extract-utilities" Jan 29 09:14:34 crc kubenswrapper[5031]: E0129 09:14:34.310949 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="extract-content" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.310958 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="extract-content" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.311156 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="00094a8b-f6e5-43aa-ada0-c9c46726e302" containerName="registry-server" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.311791 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.313682 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.314484 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.314703 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.315847 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.316510 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.335871 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb"] Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.505830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.506341 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.506513 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.506720 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25bp\" (UniqueName: \"kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.506867 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.609082 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.609197 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r25bp\" (UniqueName: \"kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.609244 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.609311 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.609381 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.618288 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.618575 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.618792 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.624303 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.628499 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r25bp\" (UniqueName: \"kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:34 crc kubenswrapper[5031]: I0129 09:14:34.636514 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:35 crc kubenswrapper[5031]: I0129 09:14:35.163744 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb"] Jan 29 09:14:35 crc kubenswrapper[5031]: I0129 09:14:35.168201 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:14:35 crc kubenswrapper[5031]: I0129 09:14:35.247642 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" event={"ID":"b62042d2-d6ae-42b6-abaa-b08bdb66257d","Type":"ContainerStarted","Data":"50909885ee7a77afbbd04d788ff5c1610026725f615d9a3458b7acbdd4e0cc76"} Jan 29 09:14:36 crc kubenswrapper[5031]: I0129 09:14:36.257900 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" event={"ID":"b62042d2-d6ae-42b6-abaa-b08bdb66257d","Type":"ContainerStarted","Data":"3de2a5862967c0332f101e902dd69d7c60b065e76d4d6278d62cf9cb2d3cf965"} Jan 29 09:14:36 crc kubenswrapper[5031]: I0129 09:14:36.277291 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" podStartSLOduration=1.774429034 podStartE2EDuration="2.277271895s" podCreationTimestamp="2026-01-29 09:14:34 +0000 UTC" firstStartedPulling="2026-01-29 09:14:35.167833853 +0000 UTC m=+2155.667421815" lastFinishedPulling="2026-01-29 09:14:35.670676724 +0000 UTC m=+2156.170264676" observedRunningTime="2026-01-29 09:14:36.272983238 +0000 UTC m=+2156.772571210" watchObservedRunningTime="2026-01-29 09:14:36.277271895 +0000 UTC m=+2156.776859847" Jan 29 09:14:48 crc kubenswrapper[5031]: I0129 09:14:48.367086 5031 generic.go:334] "Generic (PLEG): container finished" podID="b62042d2-d6ae-42b6-abaa-b08bdb66257d" containerID="3de2a5862967c0332f101e902dd69d7c60b065e76d4d6278d62cf9cb2d3cf965" exitCode=0 Jan 29 09:14:48 crc kubenswrapper[5031]: I0129 09:14:48.367254 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" event={"ID":"b62042d2-d6ae-42b6-abaa-b08bdb66257d","Type":"ContainerDied","Data":"3de2a5862967c0332f101e902dd69d7c60b065e76d4d6278d62cf9cb2d3cf965"} Jan 29 09:14:49 crc kubenswrapper[5031]: I0129 09:14:49.894907 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.007218 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph\") pod \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.007315 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle\") pod \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.007407 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory\") pod \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.007501 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r25bp\" (UniqueName: \"kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp\") pod \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.007624 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam\") pod \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\" (UID: \"b62042d2-d6ae-42b6-abaa-b08bdb66257d\") " Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.014115 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp" (OuterVolumeSpecName: "kube-api-access-r25bp") pod "b62042d2-d6ae-42b6-abaa-b08bdb66257d" (UID: "b62042d2-d6ae-42b6-abaa-b08bdb66257d"). InnerVolumeSpecName "kube-api-access-r25bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.014502 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b62042d2-d6ae-42b6-abaa-b08bdb66257d" (UID: "b62042d2-d6ae-42b6-abaa-b08bdb66257d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.015582 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph" (OuterVolumeSpecName: "ceph") pod "b62042d2-d6ae-42b6-abaa-b08bdb66257d" (UID: "b62042d2-d6ae-42b6-abaa-b08bdb66257d"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.037463 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory" (OuterVolumeSpecName: "inventory") pod "b62042d2-d6ae-42b6-abaa-b08bdb66257d" (UID: "b62042d2-d6ae-42b6-abaa-b08bdb66257d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.039678 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b62042d2-d6ae-42b6-abaa-b08bdb66257d" (UID: "b62042d2-d6ae-42b6-abaa-b08bdb66257d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.110247 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.110296 5031 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.110310 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.110321 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r25bp\" (UniqueName: \"kubernetes.io/projected/b62042d2-d6ae-42b6-abaa-b08bdb66257d-kube-api-access-r25bp\") on node \"crc\" DevicePath \"\"" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.110333 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b62042d2-d6ae-42b6-abaa-b08bdb66257d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.387849 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" event={"ID":"b62042d2-d6ae-42b6-abaa-b08bdb66257d","Type":"ContainerDied","Data":"50909885ee7a77afbbd04d788ff5c1610026725f615d9a3458b7acbdd4e0cc76"} Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.388228 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50909885ee7a77afbbd04d788ff5c1610026725f615d9a3458b7acbdd4e0cc76" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.387900 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.463339 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd"] Jan 29 09:14:50 crc kubenswrapper[5031]: E0129 09:14:50.463792 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62042d2-d6ae-42b6-abaa-b08bdb66257d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.463812 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62042d2-d6ae-42b6-abaa-b08bdb66257d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.464023 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62042d2-d6ae-42b6-abaa-b08bdb66257d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.464656 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.467791 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.468262 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.468436 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.468593 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.468873 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.478252 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd"] Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.619660 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptcv\" (UniqueName: \"kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.619731 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.619872 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.619917 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.619972 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.722273 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.722394 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.722452 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.722501 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptcv\" (UniqueName: \"kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.722539 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.727784 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: 
\"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.732487 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.732930 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.741810 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.743909 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptcv\" (UniqueName: \"kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:50 crc kubenswrapper[5031]: I0129 09:14:50.800633 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:14:51 crc kubenswrapper[5031]: I0129 09:14:51.370217 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd"] Jan 29 09:14:51 crc kubenswrapper[5031]: W0129 09:14:51.380545 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91b928d8_c43f_4fa6_b673_62b42f2c88a1.slice/crio-31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde WatchSource:0}: Error finding container 31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde: Status 404 returned error can't find the container with id 31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde Jan 29 09:14:51 crc kubenswrapper[5031]: I0129 09:14:51.397790 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" event={"ID":"91b928d8-c43f-4fa6-b673-62b42f2c88a1","Type":"ContainerStarted","Data":"31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde"} Jan 29 09:14:52 crc kubenswrapper[5031]: I0129 09:14:52.407295 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" event={"ID":"91b928d8-c43f-4fa6-b673-62b42f2c88a1","Type":"ContainerStarted","Data":"1b03b34fc4e78b9ff7fd3d6802d17772e8c838d1539ddda91bcefc750ea8096a"} Jan 29 09:14:52 crc kubenswrapper[5031]: I0129 09:14:52.426569 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" podStartSLOduration=2.030581645 podStartE2EDuration="2.42654261s" podCreationTimestamp="2026-01-29 09:14:50 +0000 UTC" firstStartedPulling="2026-01-29 09:14:51.383011274 +0000 UTC m=+2171.882599226" lastFinishedPulling="2026-01-29 09:14:51.778972229 +0000 UTC m=+2172.278560191" observedRunningTime="2026-01-29 09:14:52.421589035 +0000 UTC m=+2172.921176987" watchObservedRunningTime="2026-01-29 09:14:52.42654261 +0000 UTC m=+2172.926130562" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.141518 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr"] Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.146387 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.149236 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.149606 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.155124 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr"] Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.236077 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xr6\" (UniqueName: \"kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.236253 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.236284 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.338752 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xr6\" (UniqueName: \"kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.338880 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.338912 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.340003 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume\") pod 
\"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.349470 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.360142 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xr6\" (UniqueName: \"kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6\") pod \"collect-profiles-29494635-zzqrr\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.481246 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:00 crc kubenswrapper[5031]: I0129 09:15:00.974397 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr"] Jan 29 09:15:01 crc kubenswrapper[5031]: I0129 09:15:01.484617 5031 generic.go:334] "Generic (PLEG): container finished" podID="b3574b9c-234d-4766-a0da-e6cf3ffecf98" containerID="78f9f400d267a0803b910027c69cfe3b323e290e4ffe4fc29d92a3c803a9da5c" exitCode=0 Jan 29 09:15:01 crc kubenswrapper[5031]: I0129 09:15:01.484722 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" event={"ID":"b3574b9c-234d-4766-a0da-e6cf3ffecf98","Type":"ContainerDied","Data":"78f9f400d267a0803b910027c69cfe3b323e290e4ffe4fc29d92a3c803a9da5c"} Jan 29 09:15:01 crc kubenswrapper[5031]: I0129 09:15:01.485088 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" event={"ID":"b3574b9c-234d-4766-a0da-e6cf3ffecf98","Type":"ContainerStarted","Data":"d80f7ba56f33e447804979041056430e44e4796d484b1f9ddd471668296d97a4"} Jan 29 09:15:02 crc kubenswrapper[5031]: I0129 09:15:02.883711 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:02 crc kubenswrapper[5031]: I0129 09:15:02.995222 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume\") pod \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " Jan 29 09:15:02 crc kubenswrapper[5031]: I0129 09:15:02.995421 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume\") pod \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " Jan 29 09:15:02 crc kubenswrapper[5031]: I0129 09:15:02.995558 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xr6\" (UniqueName: \"kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6\") pod \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\" (UID: \"b3574b9c-234d-4766-a0da-e6cf3ffecf98\") " Jan 29 09:15:02 crc kubenswrapper[5031]: I0129 09:15:02.996326 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3574b9c-234d-4766-a0da-e6cf3ffecf98" (UID: "b3574b9c-234d-4766-a0da-e6cf3ffecf98"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.001701 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6" (OuterVolumeSpecName: "kube-api-access-j8xr6") pod "b3574b9c-234d-4766-a0da-e6cf3ffecf98" (UID: "b3574b9c-234d-4766-a0da-e6cf3ffecf98"). InnerVolumeSpecName "kube-api-access-j8xr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.006694 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3574b9c-234d-4766-a0da-e6cf3ffecf98" (UID: "b3574b9c-234d-4766-a0da-e6cf3ffecf98"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.098108 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3574b9c-234d-4766-a0da-e6cf3ffecf98-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.098146 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3574b9c-234d-4766-a0da-e6cf3ffecf98-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.098156 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xr6\" (UniqueName: \"kubernetes.io/projected/b3574b9c-234d-4766-a0da-e6cf3ffecf98-kube-api-access-j8xr6\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.503141 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" event={"ID":"b3574b9c-234d-4766-a0da-e6cf3ffecf98","Type":"ContainerDied","Data":"d80f7ba56f33e447804979041056430e44e4796d484b1f9ddd471668296d97a4"} Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.503191 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80f7ba56f33e447804979041056430e44e4796d484b1f9ddd471668296d97a4" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.503194 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzqrr" Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.972992 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"] Jan 29 09:15:03 crc kubenswrapper[5031]: I0129 09:15:03.981255 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-g4jbz"] Jan 29 09:15:04 crc kubenswrapper[5031]: I0129 09:15:04.297180 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba2693e-b691-45ea-9447-95fc1da261ed" path="/var/lib/kubelet/pods/dba2693e-b691-45ea-9447-95fc1da261ed/volumes" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.073341 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:08 crc kubenswrapper[5031]: E0129 09:15:08.075321 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3574b9c-234d-4766-a0da-e6cf3ffecf98" containerName="collect-profiles" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.075338 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3574b9c-234d-4766-a0da-e6cf3ffecf98" containerName="collect-profiles" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.075568 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3574b9c-234d-4766-a0da-e6cf3ffecf98" containerName="collect-profiles" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.077512 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.086729 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.207604 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.207659 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.207788 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5mw\" (UniqueName: \"kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.309662 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.309715 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.309803 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh5mw\" (UniqueName: \"kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.310758 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.310777 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.340672 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zh5mw\" (UniqueName: \"kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw\") pod \"community-operators-x79n9\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.398434 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.468132 5031 scope.go:117] "RemoveContainer" containerID="09ac403fa797218fc8b7c014fcfbbc85a6ee80f8e5c4841aaffe56da4133934d" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.493322 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.493517 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.645192 5031 scope.go:117] "RemoveContainer" containerID="635763c6313da26b6259243e30bb5998eeab71b9dcef1435c8bced51628b5bfe" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.816782 5031 scope.go:117] "RemoveContainer" containerID="8a6617cdf9345bf6b24edcd4faa783fff277baa8327d9925e0ba06fe17e947af" Jan 29 09:15:08 crc kubenswrapper[5031]: I0129 09:15:08.908569 5031 scope.go:117] "RemoveContainer" containerID="60cb8b21072500335c4e00b8c3edaadda68181b79c9a4fe719cddb84fc520d15" Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.024783 5031 scope.go:117] "RemoveContainer" containerID="f8cb3450da831e50f65bd45c2fe072f0e9658654138e584e854b6130807ec146" Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.088851 5031 scope.go:117] "RemoveContainer" containerID="1267958ea49b6af110c56a3a00b046ee49d81176aa0d3b6f1891e7e5ad11f881" Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.116635 5031 scope.go:117] "RemoveContainer" containerID="f57ae943bbf74e86e3036199b0d8d647cca0d67f3dd5956ce749836cf1bd085c" Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.138256 5031 scope.go:117] "RemoveContainer" containerID="3a09c906266b1f4456ca128f4724e92883764d60e9282266c6ea03368fd9fe65" Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.138744 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.578707 5031 generic.go:334] "Generic (PLEG): container finished" podID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerID="7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a" exitCode=0 Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.578816 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerDied","Data":"7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a"} Jan 29 09:15:09 crc kubenswrapper[5031]: I0129 09:15:09.579086 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerStarted","Data":"b35f4a312a088292654343e8262647f2bf6333e2422cd2b36f3314a49285bcba"} Jan 29 09:15:10 crc kubenswrapper[5031]: I0129 09:15:10.588164 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerStarted","Data":"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155"} Jan 29 09:15:11 crc kubenswrapper[5031]: I0129 09:15:11.597587 5031 generic.go:334] "Generic (PLEG): container finished" podID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerID="76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155" exitCode=0 Jan 29 09:15:11 crc kubenswrapper[5031]: I0129 09:15:11.597647 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerDied","Data":"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155"} Jan 29 09:15:12 crc kubenswrapper[5031]: I0129 09:15:12.608924 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerStarted","Data":"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac"} Jan 29 09:15:12 crc kubenswrapper[5031]: I0129 09:15:12.629868 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x79n9" podStartSLOduration=2.201783769 podStartE2EDuration="4.629852166s" podCreationTimestamp="2026-01-29 09:15:08 +0000 UTC" firstStartedPulling="2026-01-29 09:15:09.580538961 +0000 UTC m=+2190.080126923" lastFinishedPulling="2026-01-29 09:15:12.008607368 +0000 UTC m=+2192.508195320" observedRunningTime="2026-01-29 09:15:12.628215382 +0000 UTC m=+2193.127803354" watchObservedRunningTime="2026-01-29 09:15:12.629852166 +0000 UTC m=+2193.129440118" Jan 29 09:15:18 crc kubenswrapper[5031]: I0129 09:15:18.399058 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:18 crc kubenswrapper[5031]: I0129 09:15:18.399643 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:18 crc kubenswrapper[5031]: I0129 09:15:18.448549 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:18 crc kubenswrapper[5031]: I0129 09:15:18.798105 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:18 crc kubenswrapper[5031]: I0129 09:15:18.852749 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:20 crc kubenswrapper[5031]: I0129 09:15:20.671393 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x79n9" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="registry-server" containerID="cri-o://f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac" gracePeriod=2 Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.106031 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.193078 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities\") pod \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.193178 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh5mw\" (UniqueName: \"kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw\") pod \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.193356 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content\") pod \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\" (UID: \"6728810f-ca6b-4b34-b89f-ab954ebd9d17\") " Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.194172 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities" (OuterVolumeSpecName: "utilities") pod "6728810f-ca6b-4b34-b89f-ab954ebd9d17" (UID: "6728810f-ca6b-4b34-b89f-ab954ebd9d17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.199914 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw" (OuterVolumeSpecName: "kube-api-access-zh5mw") pod "6728810f-ca6b-4b34-b89f-ab954ebd9d17" (UID: "6728810f-ca6b-4b34-b89f-ab954ebd9d17"). InnerVolumeSpecName "kube-api-access-zh5mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.248250 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6728810f-ca6b-4b34-b89f-ab954ebd9d17" (UID: "6728810f-ca6b-4b34-b89f-ab954ebd9d17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.296423 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.296472 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6728810f-ca6b-4b34-b89f-ab954ebd9d17-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.296484 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh5mw\" (UniqueName: \"kubernetes.io/projected/6728810f-ca6b-4b34-b89f-ab954ebd9d17-kube-api-access-zh5mw\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.680308 5031 generic.go:334] "Generic (PLEG): container finished" podID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerID="f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac" exitCode=0 Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.680384 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerDied","Data":"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac"} Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.680414 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x79n9" event={"ID":"6728810f-ca6b-4b34-b89f-ab954ebd9d17","Type":"ContainerDied","Data":"b35f4a312a088292654343e8262647f2bf6333e2422cd2b36f3314a49285bcba"} Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.680430 5031 scope.go:117] "RemoveContainer" containerID="f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.680574 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x79n9" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.703481 5031 scope.go:117] "RemoveContainer" containerID="76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.726483 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.738130 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x79n9"] Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.743641 5031 scope.go:117] "RemoveContainer" containerID="7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.785908 5031 scope.go:117] "RemoveContainer" containerID="f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac" Jan 29 09:15:21 crc kubenswrapper[5031]: E0129 09:15:21.786353 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac\": container with ID starting with f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac not found: ID does not exist" containerID="f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.786449 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac"} err="failed to get container status \"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac\": rpc error: code = NotFound desc = could not find container \"f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac\": container with ID starting with f89de33cceffb775b742cfc56bd96229b47fc0be150dd3fed1fd97b449e601ac not found: ID does not exist" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.786478 5031 scope.go:117] "RemoveContainer" containerID="76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155" Jan 29 09:15:21 crc kubenswrapper[5031]: E0129 09:15:21.786748 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155\": container with ID starting with 76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155 not found: ID does not exist" containerID="76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.786772 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155"} err="failed to get container status \"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155\": rpc error: code = NotFound desc = could not find container \"76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155\": container with ID starting with 76a6021dc6fb65f7b7f7039593b0108ada674379bf1b30424d3ef5baff7f6155 not found: ID does not exist" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.786788 5031 scope.go:117] "RemoveContainer" containerID="7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a" Jan 29 09:15:21 crc kubenswrapper[5031]: E0129 09:15:21.787031 5031 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a\": container with ID starting with 7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a not found: ID does not exist" containerID="7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a" Jan 29 09:15:21 crc kubenswrapper[5031]: I0129 09:15:21.787080 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a"} err="failed to get container status \"7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a\": rpc error: code = NotFound desc = could not find container \"7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a\": container with ID starting with 7635d60aece038849c32f614b71fea7f1c0461c7abcfd1f56c1a3fa822d89f5a not found: ID does not exist" Jan 29 09:15:22 crc kubenswrapper[5031]: I0129 09:15:22.295170 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" path="/var/lib/kubelet/pods/6728810f-ca6b-4b34-b89f-ab954ebd9d17/volumes" Jan 29 09:15:38 crc kubenswrapper[5031]: I0129 09:15:38.493507 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:15:38 crc kubenswrapper[5031]: I0129 09:15:38.494163 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:08 crc kubenswrapper[5031]: I0129 09:16:08.494938 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:16:08 crc kubenswrapper[5031]: I0129 09:16:08.495410 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:08 crc kubenswrapper[5031]: I0129 09:16:08.495454 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:16:08 crc kubenswrapper[5031]: I0129 09:16:08.496188 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:16:08 crc kubenswrapper[5031]: I0129 09:16:08.496233 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" 
podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" gracePeriod=600 Jan 29 09:16:08 crc kubenswrapper[5031]: E0129 09:16:08.617216 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.035785 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" exitCode=0 Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.035861 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d"} Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.036290 5031 scope.go:117] "RemoveContainer" containerID="a2acc74ee720b814c3f073501dcc1696fdab5641a210791634c92a90252cb4dd" Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.037143 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:16:09 crc kubenswrapper[5031]: E0129 09:16:09.037530 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.359863 5031 scope.go:117] "RemoveContainer" containerID="b4823c109eed2aeca406a2657b6c873d5824f4aa2e6afcbf6c5d0aaad89d577d" Jan 29 09:16:09 crc kubenswrapper[5031]: I0129 09:16:09.393302 5031 scope.go:117] "RemoveContainer" containerID="ab2cba68d0af59d32792c704071ea7620cafc0bdb73666d2455514880f1bff01" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.027301 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:21 crc kubenswrapper[5031]: E0129 09:16:21.028905 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="extract-content" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.028931 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="extract-content" Jan 29 09:16:21 crc kubenswrapper[5031]: E0129 09:16:21.028952 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="registry-server" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.028964 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="registry-server" Jan 29 09:16:21 crc kubenswrapper[5031]: E0129 09:16:21.028985 5031 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="extract-utilities" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.028994 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="extract-utilities" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.029220 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6728810f-ca6b-4b34-b89f-ab954ebd9d17" containerName="registry-server" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.031202 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.048177 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.064718 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88nt\" (UniqueName: \"kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.065057 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.065117 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.167624 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.167690 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.167955 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88nt\" (UniqueName: \"kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.168182 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities\") pod 
\"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.168504 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.194662 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88nt\" (UniqueName: \"kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt\") pod \"certified-operators-bc7zr\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.282930 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:16:21 crc kubenswrapper[5031]: E0129 09:16:21.283236 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.377429 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:21 crc kubenswrapper[5031]: I0129 09:16:21.928596 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:22 crc kubenswrapper[5031]: I0129 09:16:22.149416 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerStarted","Data":"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f"} Jan 29 09:16:22 crc kubenswrapper[5031]: I0129 09:16:22.149847 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerStarted","Data":"6e78cf9e339674545bbc2eca0da56d0d87a1c33b6379596a9d5c7429f152e1a7"} Jan 29 09:16:23 crc kubenswrapper[5031]: I0129 09:16:23.160057 5031 generic.go:334] "Generic (PLEG): container finished" podID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerID="4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f" exitCode=0 Jan 29 09:16:23 crc kubenswrapper[5031]: I0129 09:16:23.160135 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerDied","Data":"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f"} Jan 29 09:16:24 crc kubenswrapper[5031]: I0129 09:16:24.170366 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerStarted","Data":"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397"} Jan 29 09:16:25 crc kubenswrapper[5031]: 
I0129 09:16:25.181634 5031 generic.go:334] "Generic (PLEG): container finished" podID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerID="0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397" exitCode=0 Jan 29 09:16:25 crc kubenswrapper[5031]: I0129 09:16:25.181762 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerDied","Data":"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397"} Jan 29 09:16:25 crc kubenswrapper[5031]: I0129 09:16:25.182022 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerStarted","Data":"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a"} Jan 29 09:16:31 crc kubenswrapper[5031]: I0129 09:16:31.377779 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:31 crc kubenswrapper[5031]: I0129 09:16:31.378316 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:31 crc kubenswrapper[5031]: I0129 09:16:31.424581 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:31 crc kubenswrapper[5031]: I0129 09:16:31.448551 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bc7zr" podStartSLOduration=8.92235911 podStartE2EDuration="10.448532239s" podCreationTimestamp="2026-01-29 09:16:21 +0000 UTC" firstStartedPulling="2026-01-29 09:16:23.162095595 +0000 UTC m=+2263.661683537" lastFinishedPulling="2026-01-29 09:16:24.688268714 +0000 UTC m=+2265.187856666" observedRunningTime="2026-01-29 09:16:25.206612134 +0000 UTC m=+2265.706200086" watchObservedRunningTime="2026-01-29 09:16:31.448532239 +0000 UTC m=+2271.948120181" Jan 29 09:16:32 crc kubenswrapper[5031]: I0129 09:16:32.262215 5031 generic.go:334] "Generic (PLEG): container finished" podID="91b928d8-c43f-4fa6-b673-62b42f2c88a1" containerID="1b03b34fc4e78b9ff7fd3d6802d17772e8c838d1539ddda91bcefc750ea8096a" exitCode=0 Jan 29 09:16:32 crc kubenswrapper[5031]: I0129 09:16:32.262282 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" event={"ID":"91b928d8-c43f-4fa6-b673-62b42f2c88a1","Type":"ContainerDied","Data":"1b03b34fc4e78b9ff7fd3d6802d17772e8c838d1539ddda91bcefc750ea8096a"} Jan 29 09:16:32 crc kubenswrapper[5031]: I0129 09:16:32.319630 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:32 crc kubenswrapper[5031]: I0129 09:16:32.369275 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.697806 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.819295 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam\") pod \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.819678 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle\") pod \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.819774 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph\") pod \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.819814 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ptcv\" (UniqueName: \"kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv\") pod \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.819856 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory\") pod \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\" (UID: \"91b928d8-c43f-4fa6-b673-62b42f2c88a1\") " Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.832667 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph" (OuterVolumeSpecName: "ceph") pod "91b928d8-c43f-4fa6-b673-62b42f2c88a1" (UID: "91b928d8-c43f-4fa6-b673-62b42f2c88a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.850836 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "91b928d8-c43f-4fa6-b673-62b42f2c88a1" (UID: "91b928d8-c43f-4fa6-b673-62b42f2c88a1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.851782 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv" (OuterVolumeSpecName: "kube-api-access-4ptcv") pod "91b928d8-c43f-4fa6-b673-62b42f2c88a1" (UID: "91b928d8-c43f-4fa6-b673-62b42f2c88a1"). InnerVolumeSpecName "kube-api-access-4ptcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.913647 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory" (OuterVolumeSpecName: "inventory") pod "91b928d8-c43f-4fa6-b673-62b42f2c88a1" (UID: "91b928d8-c43f-4fa6-b673-62b42f2c88a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.922414 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.922743 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ptcv\" (UniqueName: \"kubernetes.io/projected/91b928d8-c43f-4fa6-b673-62b42f2c88a1-kube-api-access-4ptcv\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.922852 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.922927 5031 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:33 crc kubenswrapper[5031]: I0129 09:16:33.926343 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91b928d8-c43f-4fa6-b673-62b42f2c88a1" (UID: "91b928d8-c43f-4fa6-b673-62b42f2c88a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.028192 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b928d8-c43f-4fa6-b673-62b42f2c88a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.281122 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.281112 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd" event={"ID":"91b928d8-c43f-4fa6-b673-62b42f2c88a1","Type":"ContainerDied","Data":"31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde"} Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.281182 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d545c029f891987ddc7f9f03cf276ec0ac15e0235cc38cdb34bcdc43d29dde" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.281687 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bc7zr" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="registry-server" containerID="cri-o://64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a" gracePeriod=2 Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.381442 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65"] Jan 29 09:16:34 crc kubenswrapper[5031]: E0129 09:16:34.381888 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b928d8-c43f-4fa6-b673-62b42f2c88a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.381907 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b928d8-c43f-4fa6-b673-62b42f2c88a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.382103 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b928d8-c43f-4fa6-b673-62b42f2c88a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.382762 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.387720 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.387982 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.388120 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.388181 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.389571 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.400940 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65"] Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.435950 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8bb\" (UniqueName: \"kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.436019 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.436107 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.436176 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.538785 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.538943 5031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.539034 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz8bb\" (UniqueName: \"kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.539069 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.543444 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.543481 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.544484 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.560178 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz8bb\" (UniqueName: \"kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-h9b65\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:34 crc kubenswrapper[5031]: I0129 09:16:34.707434 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.267490 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65"] Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.279090 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.282628 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:16:35 crc kubenswrapper[5031]: E0129 09:16:35.282981 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.296718 5031 generic.go:334] "Generic (PLEG): container finished" podID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerID="64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a" exitCode=0 Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.296819 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerDied","Data":"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a"} Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.296874 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc7zr" event={"ID":"3ee12a74-d15c-4706-9f16-f927226fd10a","Type":"ContainerDied","Data":"6e78cf9e339674545bbc2eca0da56d0d87a1c33b6379596a9d5c7429f152e1a7"} Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.296901 5031 scope.go:117] "RemoveContainer" containerID="64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.298974 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bc7zr" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.331077 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" event={"ID":"c9397ed4-a4ea-45be-9115-657795050184","Type":"ContainerStarted","Data":"11433edaa349a87f9207334e3fa884cab31a0e90040223aba6345fa7c545da02"} Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.358293 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content\") pod \"3ee12a74-d15c-4706-9f16-f927226fd10a\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.358409 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88nt\" (UniqueName: \"kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt\") pod \"3ee12a74-d15c-4706-9f16-f927226fd10a\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.358487 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities\") pod \"3ee12a74-d15c-4706-9f16-f927226fd10a\" (UID: \"3ee12a74-d15c-4706-9f16-f927226fd10a\") " Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.364157 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities" (OuterVolumeSpecName: "utilities") pod "3ee12a74-d15c-4706-9f16-f927226fd10a" (UID: "3ee12a74-d15c-4706-9f16-f927226fd10a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.366838 5031 scope.go:117] "RemoveContainer" containerID="0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.372113 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt" (OuterVolumeSpecName: "kube-api-access-g88nt") pod "3ee12a74-d15c-4706-9f16-f927226fd10a" (UID: "3ee12a74-d15c-4706-9f16-f927226fd10a"). InnerVolumeSpecName "kube-api-access-g88nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.393718 5031 scope.go:117] "RemoveContainer" containerID="4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.418338 5031 scope.go:117] "RemoveContainer" containerID="64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a" Jan 29 09:16:35 crc kubenswrapper[5031]: E0129 09:16:35.419045 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a\": container with ID starting with 64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a not found: ID does not exist" containerID="64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.419107 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a"} err="failed to get container status \"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a\": rpc error: code = NotFound desc = could not find container \"64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a\": container with ID starting with 64f5b96345bf139e0b173aaebe98689ec3761da2c9cafaf841b16895e5c19e4a not found: ID does not exist" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.419189 5031 scope.go:117] "RemoveContainer" containerID="0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397" Jan 29 09:16:35 crc kubenswrapper[5031]: E0129 09:16:35.419870 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397\": container with ID starting with 0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397 not found: ID does not exist" containerID="0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.419906 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397"} err="failed to get container status \"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397\": rpc error: code = NotFound desc = could not find container \"0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397\": container with ID starting with 0b31190457125fb887de7a6c001cac9fbfec71597a089f35974e83fe51438397 not found: ID does not exist" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.419931 5031 scope.go:117] "RemoveContainer" containerID="4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f" Jan 29 09:16:35 crc kubenswrapper[5031]: E0129 09:16:35.420262 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f\": container with ID starting with 4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f not found: ID does not exist" containerID="4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.420301 5031 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f"} err="failed to get container status \"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f\": rpc error: code = NotFound desc = could not find container \"4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f\": container with ID starting with 4d3858328870dc6f49e904bb7e4d8c53c09273ed35881a3c4762bc0d285dee9f not found: ID does not exist" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.420557 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ee12a74-d15c-4706-9f16-f927226fd10a" (UID: "3ee12a74-d15c-4706-9f16-f927226fd10a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.461483 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.461529 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g88nt\" (UniqueName: \"kubernetes.io/projected/3ee12a74-d15c-4706-9f16-f927226fd10a-kube-api-access-g88nt\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.461549 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee12a74-d15c-4706-9f16-f927226fd10a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.692468 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:35 crc kubenswrapper[5031]: I0129 09:16:35.700153 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bc7zr"] Jan 29 09:16:36 crc kubenswrapper[5031]: I0129 09:16:36.294025 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" path="/var/lib/kubelet/pods/3ee12a74-d15c-4706-9f16-f927226fd10a/volumes" Jan 29 09:16:36 crc kubenswrapper[5031]: I0129 09:16:36.342511 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" event={"ID":"c9397ed4-a4ea-45be-9115-657795050184","Type":"ContainerStarted","Data":"fc41dbb2dba062a41cf97cd012ea61848adff504ed006218525d0164abe54d12"} Jan 29 09:16:36 crc kubenswrapper[5031]: I0129 09:16:36.368558 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" podStartSLOduration=1.782898329 podStartE2EDuration="2.368537156s" podCreationTimestamp="2026-01-29 09:16:34 +0000 UTC" firstStartedPulling="2026-01-29 09:16:35.27216353 +0000 UTC m=+2275.771751482" lastFinishedPulling="2026-01-29 09:16:35.857802357 +0000 UTC m=+2276.357390309" observedRunningTime="2026-01-29 09:16:36.359558784 +0000 UTC m=+2276.859146746" watchObservedRunningTime="2026-01-29 09:16:36.368537156 +0000 UTC m=+2276.868125108" Jan 29 09:16:46 crc kubenswrapper[5031]: I0129 09:16:46.283134 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:16:46 crc kubenswrapper[5031]: E0129 
09:16:46.283826 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:16:57 crc kubenswrapper[5031]: I0129 09:16:57.282905 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:16:57 crc kubenswrapper[5031]: E0129 09:16:57.283884 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:17:01 crc kubenswrapper[5031]: I0129 09:17:01.559764 5031 generic.go:334] "Generic (PLEG): container finished" podID="c9397ed4-a4ea-45be-9115-657795050184" containerID="fc41dbb2dba062a41cf97cd012ea61848adff504ed006218525d0164abe54d12" exitCode=0 Jan 29 09:17:01 crc kubenswrapper[5031]: I0129 09:17:01.560048 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" event={"ID":"c9397ed4-a4ea-45be-9115-657795050184","Type":"ContainerDied","Data":"fc41dbb2dba062a41cf97cd012ea61848adff504ed006218525d0164abe54d12"} Jan 29 09:17:02 crc kubenswrapper[5031]: I0129 09:17:02.979825 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.114774 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam\") pod \"c9397ed4-a4ea-45be-9115-657795050184\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.115171 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph\") pod \"c9397ed4-a4ea-45be-9115-657795050184\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.115192 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory\") pod \"c9397ed4-a4ea-45be-9115-657795050184\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.115312 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz8bb\" (UniqueName: \"kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb\") pod \"c9397ed4-a4ea-45be-9115-657795050184\" (UID: \"c9397ed4-a4ea-45be-9115-657795050184\") " Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.121376 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb" (OuterVolumeSpecName: "kube-api-access-dz8bb") pod "c9397ed4-a4ea-45be-9115-657795050184" (UID: "c9397ed4-a4ea-45be-9115-657795050184"). InnerVolumeSpecName "kube-api-access-dz8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.121586 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph" (OuterVolumeSpecName: "ceph") pod "c9397ed4-a4ea-45be-9115-657795050184" (UID: "c9397ed4-a4ea-45be-9115-657795050184"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.142526 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9397ed4-a4ea-45be-9115-657795050184" (UID: "c9397ed4-a4ea-45be-9115-657795050184"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.143951 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory" (OuterVolumeSpecName: "inventory") pod "c9397ed4-a4ea-45be-9115-657795050184" (UID: "c9397ed4-a4ea-45be-9115-657795050184"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.217525 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.217570 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.217584 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9397ed4-a4ea-45be-9115-657795050184-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.217596 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz8bb\" (UniqueName: \"kubernetes.io/projected/c9397ed4-a4ea-45be-9115-657795050184-kube-api-access-dz8bb\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.577912 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" event={"ID":"c9397ed4-a4ea-45be-9115-657795050184","Type":"ContainerDied","Data":"11433edaa349a87f9207334e3fa884cab31a0e90040223aba6345fa7c545da02"} Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.578220 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11433edaa349a87f9207334e3fa884cab31a0e90040223aba6345fa7c545da02" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.578001 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-h9b65" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681125 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8"] Jan 29 09:17:03 crc kubenswrapper[5031]: E0129 09:17:03.681572 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="registry-server" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681588 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="registry-server" Jan 29 09:17:03 crc kubenswrapper[5031]: E0129 09:17:03.681616 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="extract-content" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681625 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="extract-content" Jan 29 09:17:03 crc kubenswrapper[5031]: E0129 09:17:03.681639 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="extract-utilities" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681648 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="extract-utilities" Jan 29 09:17:03 crc kubenswrapper[5031]: E0129 09:17:03.681671 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9397ed4-a4ea-45be-9115-657795050184" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681680 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9397ed4-a4ea-45be-9115-657795050184" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681854 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9397ed4-a4ea-45be-9115-657795050184" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.681871 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee12a74-d15c-4706-9f16-f927226fd10a" containerName="registry-server" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.682474 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.684771 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.684963 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.686238 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.686327 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.686668 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.688802 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8"] Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.830211 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzw4k\" (UniqueName: \"kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.830267 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.830323 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.830416 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.932877 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzw4k\" (UniqueName: \"kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: 
I0129 09:17:03.932944 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.933006 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.933035 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.937340 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.937774 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.938880 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:03 crc kubenswrapper[5031]: I0129 09:17:03.950470 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzw4k\" (UniqueName: \"kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:04 crc kubenswrapper[5031]: I0129 09:17:04.000744 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:04 crc kubenswrapper[5031]: I0129 09:17:04.497700 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8"] Jan 29 09:17:04 crc kubenswrapper[5031]: I0129 09:17:04.594319 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" event={"ID":"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41","Type":"ContainerStarted","Data":"02fe26b5fafb481a07d7911d7841dab0ce9ad7d1bcb761616fa367e97efd28ac"} Jan 29 09:17:05 crc kubenswrapper[5031]: I0129 09:17:05.602603 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" event={"ID":"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41","Type":"ContainerStarted","Data":"fcfb064a9575659682dfe694740ad8d6ece7dbaf0ede22e52af295e06ac085c7"} Jan 29 09:17:05 crc kubenswrapper[5031]: I0129 09:17:05.624404 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" podStartSLOduration=2.162382313 podStartE2EDuration="2.624386082s" podCreationTimestamp="2026-01-29 09:17:03 +0000 UTC" firstStartedPulling="2026-01-29 09:17:04.501636582 +0000 UTC m=+2305.001224534" lastFinishedPulling="2026-01-29 09:17:04.963640351 +0000 UTC m=+2305.463228303" observedRunningTime="2026-01-29 09:17:05.620166348 +0000 UTC m=+2306.119754330" watchObservedRunningTime="2026-01-29 09:17:05.624386082 +0000 UTC m=+2306.123974094" Jan 29 09:17:08 crc kubenswrapper[5031]: I0129 09:17:08.282557 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:17:08 crc kubenswrapper[5031]: E0129 09:17:08.284445 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:17:09 crc kubenswrapper[5031]: I0129 09:17:09.484209 5031 scope.go:117] "RemoveContainer" containerID="3b1e0bae10debce8219a80076459d2368e1de546326793626b2eac3d6f24916d" Jan 29 09:17:10 crc kubenswrapper[5031]: I0129 09:17:10.644037 5031 generic.go:334] "Generic (PLEG): container finished" podID="71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" containerID="fcfb064a9575659682dfe694740ad8d6ece7dbaf0ede22e52af295e06ac085c7" exitCode=0 Jan 29 09:17:10 crc kubenswrapper[5031]: I0129 09:17:10.644130 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" event={"ID":"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41","Type":"ContainerDied","Data":"fcfb064a9575659682dfe694740ad8d6ece7dbaf0ede22e52af295e06ac085c7"} Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.109219 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.204425 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph\") pod \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.204633 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzw4k\" (UniqueName: \"kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k\") pod \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.205917 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory\") pod \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.206027 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam\") pod \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\" (UID: \"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41\") " Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.219508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph" (OuterVolumeSpecName: "ceph") pod "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" (UID: "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.219744 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k" (OuterVolumeSpecName: "kube-api-access-gzw4k") pod "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" (UID: "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41"). InnerVolumeSpecName "kube-api-access-gzw4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.234670 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory" (OuterVolumeSpecName: "inventory") pod "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" (UID: "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.234628 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" (UID: "71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.308790 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.308839 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.308850 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzw4k\" (UniqueName: \"kubernetes.io/projected/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-kube-api-access-gzw4k\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.308864 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.665195 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" event={"ID":"71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41","Type":"ContainerDied","Data":"02fe26b5fafb481a07d7911d7841dab0ce9ad7d1bcb761616fa367e97efd28ac"} Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.665243 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02fe26b5fafb481a07d7911d7841dab0ce9ad7d1bcb761616fa367e97efd28ac" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.665585 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.748802 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282"] Jan 29 09:17:12 crc kubenswrapper[5031]: E0129 09:17:12.749735 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.749818 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.750062 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.750700 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.756930 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.756986 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.757105 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.757175 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.757226 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.764722 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282"] Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.933091 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjhx\" (UniqueName: \"kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.933500 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.933682 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:12 crc kubenswrapper[5031]: I0129 09:17:12.933861 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.036270 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjhx\" (UniqueName: \"kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.036337 5031 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.038452 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.038543 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.046238 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.053430 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.054026 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.056417 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjhx\" (UniqueName: \"kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tc282\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.072877 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.650529 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282"] Jan 29 09:17:13 crc kubenswrapper[5031]: I0129 09:17:13.675329 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" event={"ID":"83ca1366-5060-4771-ae03-b06595c0d5fb","Type":"ContainerStarted","Data":"8c6bc4b82646f262650b8a0c9b38c9ba7aa3ed5e2c153617942d8feb7b358b4d"} Jan 29 09:17:14 crc kubenswrapper[5031]: I0129 09:17:14.685615 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" event={"ID":"83ca1366-5060-4771-ae03-b06595c0d5fb","Type":"ContainerStarted","Data":"4b399b3742a9a0a38ebb19a7cb42862a7470c9c55553805e525e32ee61b24421"} Jan 29 09:17:14 crc kubenswrapper[5031]: I0129 09:17:14.707210 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" podStartSLOduration=2.267833889 podStartE2EDuration="2.707190505s" podCreationTimestamp="2026-01-29 09:17:12 +0000 UTC" firstStartedPulling="2026-01-29 09:17:13.655948751 +0000 UTC m=+2314.155536713" lastFinishedPulling="2026-01-29 09:17:14.095305377 +0000 UTC m=+2314.594893329" observedRunningTime="2026-01-29 09:17:14.701426799 +0000 UTC m=+2315.201014771" watchObservedRunningTime="2026-01-29 09:17:14.707190505 +0000 UTC m=+2315.206778457" Jan 29 09:17:22 crc kubenswrapper[5031]: I0129 09:17:22.282252 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:17:22 crc kubenswrapper[5031]: E0129 09:17:22.282950 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:17:34 crc kubenswrapper[5031]: I0129 09:17:34.283471 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:17:34 crc kubenswrapper[5031]: E0129 09:17:34.284243 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:17:48 crc kubenswrapper[5031]: I0129 09:17:48.964280 5031 generic.go:334] "Generic (PLEG): container finished" podID="83ca1366-5060-4771-ae03-b06595c0d5fb" containerID="4b399b3742a9a0a38ebb19a7cb42862a7470c9c55553805e525e32ee61b24421" exitCode=0 Jan 29 09:17:48 crc kubenswrapper[5031]: I0129 09:17:48.964333 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" event={"ID":"83ca1366-5060-4771-ae03-b06595c0d5fb","Type":"ContainerDied","Data":"4b399b3742a9a0a38ebb19a7cb42862a7470c9c55553805e525e32ee61b24421"} Jan 29 
09:17:49 crc kubenswrapper[5031]: I0129 09:17:49.282903 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:17:49 crc kubenswrapper[5031]: E0129 09:17:49.283253 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.380476 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.576994 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory\") pod \"83ca1366-5060-4771-ae03-b06595c0d5fb\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.577103 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph\") pod \"83ca1366-5060-4771-ae03-b06595c0d5fb\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.577134 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam\") pod \"83ca1366-5060-4771-ae03-b06595c0d5fb\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.577407 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxjhx\" (UniqueName: \"kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx\") pod \"83ca1366-5060-4771-ae03-b06595c0d5fb\" (UID: \"83ca1366-5060-4771-ae03-b06595c0d5fb\") " Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.583651 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx" (OuterVolumeSpecName: "kube-api-access-mxjhx") pod "83ca1366-5060-4771-ae03-b06595c0d5fb" (UID: "83ca1366-5060-4771-ae03-b06595c0d5fb"). InnerVolumeSpecName "kube-api-access-mxjhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.585683 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph" (OuterVolumeSpecName: "ceph") pod "83ca1366-5060-4771-ae03-b06595c0d5fb" (UID: "83ca1366-5060-4771-ae03-b06595c0d5fb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.607997 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83ca1366-5060-4771-ae03-b06595c0d5fb" (UID: "83ca1366-5060-4771-ae03-b06595c0d5fb"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.608386 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory" (OuterVolumeSpecName: "inventory") pod "83ca1366-5060-4771-ae03-b06595c0d5fb" (UID: "83ca1366-5060-4771-ae03-b06595c0d5fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.680253 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.680306 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.680320 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83ca1366-5060-4771-ae03-b06595c0d5fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.680335 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxjhx\" (UniqueName: \"kubernetes.io/projected/83ca1366-5060-4771-ae03-b06595c0d5fb-kube-api-access-mxjhx\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.983105 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" event={"ID":"83ca1366-5060-4771-ae03-b06595c0d5fb","Type":"ContainerDied","Data":"8c6bc4b82646f262650b8a0c9b38c9ba7aa3ed5e2c153617942d8feb7b358b4d"} Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.983480 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6bc4b82646f262650b8a0c9b38c9ba7aa3ed5e2c153617942d8feb7b358b4d" Jan 29 09:17:50 crc kubenswrapper[5031]: I0129 09:17:50.983204 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tc282" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.071683 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v"] Jan 29 09:17:51 crc kubenswrapper[5031]: E0129 09:17:51.072345 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ca1366-5060-4771-ae03-b06595c0d5fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.072450 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ca1366-5060-4771-ae03-b06595c0d5fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.072791 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ca1366-5060-4771-ae03-b06595c0d5fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.082667 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v"] Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.082846 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.092701 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.093020 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.093076 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.093012 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.093310 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.195686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.195754 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8np\" (UniqueName: \"kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.195808 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.195838 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.297615 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.297979 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.298392 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.298548 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8np\" (UniqueName: \"kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.302145 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.302245 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.307124 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.318096 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr8np\" (UniqueName: \"kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.408990 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.973326 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v"] Jan 29 09:17:51 crc kubenswrapper[5031]: I0129 09:17:51.993354 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" event={"ID":"fc3178c8-27cc-4f8e-a913-6eae9c84da49","Type":"ContainerStarted","Data":"e38fb78370a833d9649fb05d21866c6d394d36fa4ec12c0d6530ff6c7fbcf73c"} Jan 29 09:17:53 crc kubenswrapper[5031]: I0129 09:17:53.002448 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" event={"ID":"fc3178c8-27cc-4f8e-a913-6eae9c84da49","Type":"ContainerStarted","Data":"89578a555291d5d311e8135b905f5591342dbcdd26e6245e8be59c3608cc6af4"} Jan 29 09:17:53 crc kubenswrapper[5031]: I0129 09:17:53.023730 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" podStartSLOduration=1.5349902100000001 podStartE2EDuration="2.023707763s" podCreationTimestamp="2026-01-29 09:17:51 +0000 UTC" firstStartedPulling="2026-01-29 09:17:51.979135139 +0000 UTC m=+2352.478723091" lastFinishedPulling="2026-01-29 09:17:52.467852692 +0000 UTC m=+2352.967440644" observedRunningTime="2026-01-29 09:17:53.019452168 +0000 UTC m=+2353.519040130" watchObservedRunningTime="2026-01-29 09:17:53.023707763 +0000 UTC m=+2353.523295715" Jan 29 09:17:57 crc kubenswrapper[5031]: I0129 09:17:57.036233 5031 generic.go:334] "Generic (PLEG): container finished" podID="fc3178c8-27cc-4f8e-a913-6eae9c84da49" containerID="89578a555291d5d311e8135b905f5591342dbcdd26e6245e8be59c3608cc6af4" exitCode=0 Jan 29 09:17:57 crc kubenswrapper[5031]: I0129 09:17:57.036307 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" event={"ID":"fc3178c8-27cc-4f8e-a913-6eae9c84da49","Type":"ContainerDied","Data":"89578a555291d5d311e8135b905f5591342dbcdd26e6245e8be59c3608cc6af4"} Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.449191 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.553088 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8np\" (UniqueName: \"kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np\") pod \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.553481 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph\") pod \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.553537 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory\") pod \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.553588 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") pod \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.560226 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np" (OuterVolumeSpecName: "kube-api-access-hr8np") pod "fc3178c8-27cc-4f8e-a913-6eae9c84da49" (UID: "fc3178c8-27cc-4f8e-a913-6eae9c84da49"). InnerVolumeSpecName "kube-api-access-hr8np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.561679 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph" (OuterVolumeSpecName: "ceph") pod "fc3178c8-27cc-4f8e-a913-6eae9c84da49" (UID: "fc3178c8-27cc-4f8e-a913-6eae9c84da49"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:58 crc kubenswrapper[5031]: E0129 09:17:58.581957 5031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam podName:fc3178c8-27cc-4f8e-a913-6eae9c84da49 nodeName:}" failed. No retries permitted until 2026-01-29 09:17:59.08188569 +0000 UTC m=+2359.581473642 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam") pod "fc3178c8-27cc-4f8e-a913-6eae9c84da49" (UID: "fc3178c8-27cc-4f8e-a913-6eae9c84da49") : error deleting /var/lib/kubelet/pods/fc3178c8-27cc-4f8e-a913-6eae9c84da49/volume-subpaths: remove /var/lib/kubelet/pods/fc3178c8-27cc-4f8e-a913-6eae9c84da49/volume-subpaths: no such file or directory Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.584566 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory" (OuterVolumeSpecName: "inventory") pod "fc3178c8-27cc-4f8e-a913-6eae9c84da49" (UID: "fc3178c8-27cc-4f8e-a913-6eae9c84da49"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.657567 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr8np\" (UniqueName: \"kubernetes.io/projected/fc3178c8-27cc-4f8e-a913-6eae9c84da49-kube-api-access-hr8np\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.657623 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:58 crc kubenswrapper[5031]: I0129 09:17:58.657638 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.054778 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" event={"ID":"fc3178c8-27cc-4f8e-a913-6eae9c84da49","Type":"ContainerDied","Data":"e38fb78370a833d9649fb05d21866c6d394d36fa4ec12c0d6530ff6c7fbcf73c"} Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.054835 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e38fb78370a833d9649fb05d21866c6d394d36fa4ec12c0d6530ff6c7fbcf73c" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.054891 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.146166 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7"] Jan 29 09:17:59 crc kubenswrapper[5031]: E0129 09:17:59.146999 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3178c8-27cc-4f8e-a913-6eae9c84da49" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.147033 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3178c8-27cc-4f8e-a913-6eae9c84da49" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.147214 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3178c8-27cc-4f8e-a913-6eae9c84da49" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.148152 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.169650 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") pod \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\" (UID: \"fc3178c8-27cc-4f8e-a913-6eae9c84da49\") " Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.173263 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc3178c8-27cc-4f8e-a913-6eae9c84da49" (UID: "fc3178c8-27cc-4f8e-a913-6eae9c84da49"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.176128 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7"] Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.271806 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.271922 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.271954 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.271972 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5jc\" (UniqueName: \"kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.272042 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc3178c8-27cc-4f8e-a913-6eae9c84da49-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.373239 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.373296 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f5jc\" (UniqueName: \"kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.373468 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.373603 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.378227 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.387868 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.388178 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.392696 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f5jc\" (UniqueName: \"kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:17:59 crc kubenswrapper[5031]: I0129 09:17:59.476256 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:18:00 crc kubenswrapper[5031]: I0129 09:18:00.026413 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7"] Jan 29 09:18:00 crc kubenswrapper[5031]: I0129 09:18:00.063744 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" event={"ID":"1c21c7ac-919e-43f0-92b2-0cf64df94743","Type":"ContainerStarted","Data":"f6b7b1a003c7a38a1d60dc80cb42c307693506856a442f7a0cdc40cfdae64118"} Jan 29 09:18:01 crc kubenswrapper[5031]: I0129 09:18:01.073239 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" event={"ID":"1c21c7ac-919e-43f0-92b2-0cf64df94743","Type":"ContainerStarted","Data":"4ac17e7debd3a1e4e91d930eb7eadf68bdfefa2e4bac0fcd100e2b39260b35c0"} Jan 29 09:18:01 crc kubenswrapper[5031]: I0129 09:18:01.093316 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" podStartSLOduration=1.684690008 podStartE2EDuration="2.093297172s" podCreationTimestamp="2026-01-29 09:17:59 +0000 UTC" firstStartedPulling="2026-01-29 09:18:00.043455515 +0000 UTC m=+2360.543043477" lastFinishedPulling="2026-01-29 09:18:00.452062689 +0000 UTC m=+2360.951650641" observedRunningTime="2026-01-29 09:18:01.091892264 +0000 UTC m=+2361.591480216" watchObservedRunningTime="2026-01-29 09:18:01.093297172 +0000 UTC m=+2361.592885124" Jan 29 09:18:01 crc kubenswrapper[5031]: I0129 09:18:01.283114 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:18:01 crc kubenswrapper[5031]: E0129 09:18:01.284129 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:18:13 crc kubenswrapper[5031]: I0129 09:18:13.283024 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:18:13 crc kubenswrapper[5031]: E0129 09:18:13.283854 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:18:24 crc kubenswrapper[5031]: I0129 09:18:24.282841 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:18:24 crc kubenswrapper[5031]: E0129 09:18:24.283802 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:18:36 crc kubenswrapper[5031]: I0129 09:18:36.282815 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:18:36 crc kubenswrapper[5031]: E0129 09:18:36.283506 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:18:38 crc kubenswrapper[5031]: I0129 09:18:38.361071 5031 generic.go:334] "Generic (PLEG): container finished" podID="1c21c7ac-919e-43f0-92b2-0cf64df94743" containerID="4ac17e7debd3a1e4e91d930eb7eadf68bdfefa2e4bac0fcd100e2b39260b35c0" exitCode=0 Jan 29 09:18:38 crc kubenswrapper[5031]: I0129 09:18:38.361144 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" event={"ID":"1c21c7ac-919e-43f0-92b2-0cf64df94743","Type":"ContainerDied","Data":"4ac17e7debd3a1e4e91d930eb7eadf68bdfefa2e4bac0fcd100e2b39260b35c0"} Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.745971 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.883472 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam\") pod \"1c21c7ac-919e-43f0-92b2-0cf64df94743\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.883621 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph\") pod \"1c21c7ac-919e-43f0-92b2-0cf64df94743\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.883664 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f5jc\" (UniqueName: \"kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc\") pod \"1c21c7ac-919e-43f0-92b2-0cf64df94743\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.884595 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory\") pod \"1c21c7ac-919e-43f0-92b2-0cf64df94743\" (UID: \"1c21c7ac-919e-43f0-92b2-0cf64df94743\") " Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.890355 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph" (OuterVolumeSpecName: "ceph") pod "1c21c7ac-919e-43f0-92b2-0cf64df94743" (UID: "1c21c7ac-919e-43f0-92b2-0cf64df94743"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.890508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc" (OuterVolumeSpecName: "kube-api-access-7f5jc") pod "1c21c7ac-919e-43f0-92b2-0cf64df94743" (UID: "1c21c7ac-919e-43f0-92b2-0cf64df94743"). InnerVolumeSpecName "kube-api-access-7f5jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.911323 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c21c7ac-919e-43f0-92b2-0cf64df94743" (UID: "1c21c7ac-919e-43f0-92b2-0cf64df94743"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.911821 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory" (OuterVolumeSpecName: "inventory") pod "1c21c7ac-919e-43f0-92b2-0cf64df94743" (UID: "1c21c7ac-919e-43f0-92b2-0cf64df94743"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.987546 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.987592 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f5jc\" (UniqueName: \"kubernetes.io/projected/1c21c7ac-919e-43f0-92b2-0cf64df94743-kube-api-access-7f5jc\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.987605 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:39 crc kubenswrapper[5031]: I0129 09:18:39.987615 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c21c7ac-919e-43f0-92b2-0cf64df94743-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.378690 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" event={"ID":"1c21c7ac-919e-43f0-92b2-0cf64df94743","Type":"ContainerDied","Data":"f6b7b1a003c7a38a1d60dc80cb42c307693506856a442f7a0cdc40cfdae64118"} Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.378731 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b7b1a003c7a38a1d60dc80cb42c307693506856a442f7a0cdc40cfdae64118" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.378784 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.467910 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6t4cp"] Jan 29 09:18:40 crc kubenswrapper[5031]: E0129 09:18:40.468345 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c21c7ac-919e-43f0-92b2-0cf64df94743" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.468376 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c21c7ac-919e-43f0-92b2-0cf64df94743" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.468533 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c21c7ac-919e-43f0-92b2-0cf64df94743" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.469126 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.472188 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.472495 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.472664 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.472809 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.472927 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.487808 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6t4cp"] Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.598544 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.598889 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcqt\" (UniqueName: \"kubernetes.io/projected/8c91cd46-761e-4015-a2ea-90647c5a7be5-kube-api-access-gjcqt\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.599036 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:40 crc 
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.599110 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.701157 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.701259 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.701325 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.701360 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcqt\" (UniqueName: \"kubernetes.io/projected/8c91cd46-761e-4015-a2ea-90647c5a7be5-kube-api-access-gjcqt\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.709494 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.709636 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.709697 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph\") pod \"ssh-known-hosts-edpm-deployment-6t4cp\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp"
\"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:40 crc kubenswrapper[5031]: I0129 09:18:40.786657 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:41 crc kubenswrapper[5031]: I0129 09:18:41.329693 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6t4cp"] Jan 29 09:18:41 crc kubenswrapper[5031]: I0129 09:18:41.404982 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" event={"ID":"8c91cd46-761e-4015-a2ea-90647c5a7be5","Type":"ContainerStarted","Data":"9d3c762ed91f3af57d315498d96cee0825c0ca6826c4cc296e973f3771be3d47"} Jan 29 09:18:42 crc kubenswrapper[5031]: I0129 09:18:42.416188 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" event={"ID":"8c91cd46-761e-4015-a2ea-90647c5a7be5","Type":"ContainerStarted","Data":"c26525f88abbd2a4d4076381ffd32b096880e51d546580878d31388a1194c84e"} Jan 29 09:18:42 crc kubenswrapper[5031]: I0129 09:18:42.433634 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" podStartSLOduration=1.900624495 podStartE2EDuration="2.433617036s" podCreationTimestamp="2026-01-29 09:18:40 +0000 UTC" firstStartedPulling="2026-01-29 09:18:41.335448223 +0000 UTC m=+2401.835036165" lastFinishedPulling="2026-01-29 09:18:41.868440754 +0000 UTC m=+2402.368028706" observedRunningTime="2026-01-29 09:18:42.43300635 +0000 UTC m=+2402.932594312" watchObservedRunningTime="2026-01-29 09:18:42.433617036 +0000 UTC m=+2402.933204988" Jan 29 09:18:47 crc kubenswrapper[5031]: I0129 09:18:47.282311 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:18:47 crc kubenswrapper[5031]: E0129 09:18:47.283078 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:18:50 crc kubenswrapper[5031]: I0129 09:18:50.478785 5031 generic.go:334] "Generic (PLEG): container finished" podID="8c91cd46-761e-4015-a2ea-90647c5a7be5" containerID="c26525f88abbd2a4d4076381ffd32b096880e51d546580878d31388a1194c84e" exitCode=0 Jan 29 09:18:50 crc kubenswrapper[5031]: I0129 09:18:50.478823 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" event={"ID":"8c91cd46-761e-4015-a2ea-90647c5a7be5","Type":"ContainerDied","Data":"c26525f88abbd2a4d4076381ffd32b096880e51d546580878d31388a1194c84e"} Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.890517 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.945403 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0\") pod \"8c91cd46-761e-4015-a2ea-90647c5a7be5\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.945477 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam\") pod \"8c91cd46-761e-4015-a2ea-90647c5a7be5\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.945573 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph\") pod \"8c91cd46-761e-4015-a2ea-90647c5a7be5\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.945712 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjcqt\" (UniqueName: \"kubernetes.io/projected/8c91cd46-761e-4015-a2ea-90647c5a7be5-kube-api-access-gjcqt\") pod \"8c91cd46-761e-4015-a2ea-90647c5a7be5\" (UID: \"8c91cd46-761e-4015-a2ea-90647c5a7be5\") " Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.953545 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph" (OuterVolumeSpecName: "ceph") pod "8c91cd46-761e-4015-a2ea-90647c5a7be5" (UID: "8c91cd46-761e-4015-a2ea-90647c5a7be5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.953784 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c91cd46-761e-4015-a2ea-90647c5a7be5-kube-api-access-gjcqt" (OuterVolumeSpecName: "kube-api-access-gjcqt") pod "8c91cd46-761e-4015-a2ea-90647c5a7be5" (UID: "8c91cd46-761e-4015-a2ea-90647c5a7be5"). InnerVolumeSpecName "kube-api-access-gjcqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.972593 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c91cd46-761e-4015-a2ea-90647c5a7be5" (UID: "8c91cd46-761e-4015-a2ea-90647c5a7be5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:51 crc kubenswrapper[5031]: I0129 09:18:51.975500 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8c91cd46-761e-4015-a2ea-90647c5a7be5" (UID: "8c91cd46-761e-4015-a2ea-90647c5a7be5"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.048901 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjcqt\" (UniqueName: \"kubernetes.io/projected/8c91cd46-761e-4015-a2ea-90647c5a7be5-kube-api-access-gjcqt\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.049324 5031 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.049412 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.049503 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c91cd46-761e-4015-a2ea-90647c5a7be5-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.497856 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" event={"ID":"8c91cd46-761e-4015-a2ea-90647c5a7be5","Type":"ContainerDied","Data":"9d3c762ed91f3af57d315498d96cee0825c0ca6826c4cc296e973f3771be3d47"} Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.497919 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3c762ed91f3af57d315498d96cee0825c0ca6826c4cc296e973f3771be3d47" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.497986 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6t4cp" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.572645 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc"] Jan 29 09:18:52 crc kubenswrapper[5031]: E0129 09:18:52.573019 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c91cd46-761e-4015-a2ea-90647c5a7be5" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.573036 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c91cd46-761e-4015-a2ea-90647c5a7be5" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.573224 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c91cd46-761e-4015-a2ea-90647c5a7be5" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.573895 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.576088 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.576640 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.576662 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.576700 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.576911 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.591672 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc"] Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.660059 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.660115 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69p5\" (UniqueName: \"kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.660152 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.660220 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.762648 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.762742 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69p5\" (UniqueName: 
\"kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.762789 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.762820 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.767808 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.771024 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.774043 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.782105 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69p5\" (UniqueName: \"kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pppc\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:52 crc kubenswrapper[5031]: I0129 09:18:52.900265 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:18:53 crc kubenswrapper[5031]: I0129 09:18:53.414060 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc"] Jan 29 09:18:53 crc kubenswrapper[5031]: I0129 09:18:53.507640 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" event={"ID":"7a27e64c-0c6a-497f-bdae-50302a72b898","Type":"ContainerStarted","Data":"b452e25497d9bad2ffc66223f509371ea39686ae2d321313b07f9a6ead025353"} Jan 29 09:18:54 crc kubenswrapper[5031]: I0129 09:18:54.519953 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" event={"ID":"7a27e64c-0c6a-497f-bdae-50302a72b898","Type":"ContainerStarted","Data":"5836bf1b580bfd3e3a17e9885ff4ef3ce704c80bb2567315808011d1c9d0791c"} Jan 29 09:18:54 crc kubenswrapper[5031]: I0129 09:18:54.539508 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" podStartSLOduration=2.120189457 podStartE2EDuration="2.539483722s" podCreationTimestamp="2026-01-29 09:18:52 +0000 UTC" firstStartedPulling="2026-01-29 09:18:53.411836379 +0000 UTC m=+2413.911424331" lastFinishedPulling="2026-01-29 09:18:53.831130644 +0000 UTC m=+2414.330718596" observedRunningTime="2026-01-29 09:18:54.538042544 +0000 UTC m=+2415.037630496" watchObservedRunningTime="2026-01-29 09:18:54.539483722 +0000 UTC m=+2415.039071684" Jan 29 09:19:01 crc kubenswrapper[5031]: I0129 09:19:01.585989 5031 generic.go:334] "Generic (PLEG): container finished" podID="7a27e64c-0c6a-497f-bdae-50302a72b898" containerID="5836bf1b580bfd3e3a17e9885ff4ef3ce704c80bb2567315808011d1c9d0791c" exitCode=0 Jan 29 09:19:01 crc kubenswrapper[5031]: I0129 09:19:01.586070 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" event={"ID":"7a27e64c-0c6a-497f-bdae-50302a72b898","Type":"ContainerDied","Data":"5836bf1b580bfd3e3a17e9885ff4ef3ce704c80bb2567315808011d1c9d0791c"} Jan 29 09:19:02 crc kubenswrapper[5031]: I0129 09:19:02.282562 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:19:02 crc kubenswrapper[5031]: E0129 09:19:02.282942 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:19:02 crc kubenswrapper[5031]: I0129 09:19:02.996417 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.071931 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam\") pod \"7a27e64c-0c6a-497f-bdae-50302a72b898\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.072066 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph\") pod \"7a27e64c-0c6a-497f-bdae-50302a72b898\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.072123 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d69p5\" (UniqueName: \"kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5\") pod \"7a27e64c-0c6a-497f-bdae-50302a72b898\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.072856 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory\") pod \"7a27e64c-0c6a-497f-bdae-50302a72b898\" (UID: \"7a27e64c-0c6a-497f-bdae-50302a72b898\") " Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.078322 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph" (OuterVolumeSpecName: "ceph") pod "7a27e64c-0c6a-497f-bdae-50302a72b898" (UID: "7a27e64c-0c6a-497f-bdae-50302a72b898"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.078415 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5" (OuterVolumeSpecName: "kube-api-access-d69p5") pod "7a27e64c-0c6a-497f-bdae-50302a72b898" (UID: "7a27e64c-0c6a-497f-bdae-50302a72b898"). InnerVolumeSpecName "kube-api-access-d69p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.096685 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory" (OuterVolumeSpecName: "inventory") pod "7a27e64c-0c6a-497f-bdae-50302a72b898" (UID: "7a27e64c-0c6a-497f-bdae-50302a72b898"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.105470 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a27e64c-0c6a-497f-bdae-50302a72b898" (UID: "7a27e64c-0c6a-497f-bdae-50302a72b898"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.175146 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.175509 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.175522 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7a27e64c-0c6a-497f-bdae-50302a72b898-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.175532 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d69p5\" (UniqueName: \"kubernetes.io/projected/7a27e64c-0c6a-497f-bdae-50302a72b898-kube-api-access-d69p5\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.606410 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" event={"ID":"7a27e64c-0c6a-497f-bdae-50302a72b898","Type":"ContainerDied","Data":"b452e25497d9bad2ffc66223f509371ea39686ae2d321313b07f9a6ead025353"} Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.606462 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pppc" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.606469 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b452e25497d9bad2ffc66223f509371ea39686ae2d321313b07f9a6ead025353" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.697843 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77"] Jan 29 09:19:03 crc kubenswrapper[5031]: E0129 09:19:03.698428 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a27e64c-0c6a-497f-bdae-50302a72b898" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.698465 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a27e64c-0c6a-497f-bdae-50302a72b898" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.698754 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a27e64c-0c6a-497f-bdae-50302a72b898" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.699691 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.703633 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.703677 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.706818 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77"] Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.707153 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.707484 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.707736 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.788092 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.788160 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.788275 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt77\" (UniqueName: \"kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.788348 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.890338 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.890416 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.890534 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt77\" (UniqueName: \"kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.890582 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.895153 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.895619 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.895822 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:03 crc kubenswrapper[5031]: I0129 09:19:03.911656 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt77\" (UniqueName: \"kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:04 crc kubenswrapper[5031]: I0129 09:19:04.028609 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:04 crc kubenswrapper[5031]: I0129 09:19:04.602298 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77"] Jan 29 09:19:04 crc kubenswrapper[5031]: I0129 09:19:04.619851 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" event={"ID":"5a33f933-f687-47f9-868b-02c0a633ab0f","Type":"ContainerStarted","Data":"24f5fe6960dd3e3d753aa28c2de9a3dc5671ad61418fe85900c0964fbf0ad391"} Jan 29 09:19:05 crc kubenswrapper[5031]: I0129 09:19:05.629802 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" event={"ID":"5a33f933-f687-47f9-868b-02c0a633ab0f","Type":"ContainerStarted","Data":"f1a5f5ea975d46f9f36e0d53c6a356192f8505e9ba621cdd39504f99eb781c77"} Jan 29 09:19:05 crc kubenswrapper[5031]: I0129 09:19:05.661749 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" podStartSLOduration=2.177564781 podStartE2EDuration="2.661725524s" podCreationTimestamp="2026-01-29 09:19:03 +0000 UTC" firstStartedPulling="2026-01-29 09:19:04.608601417 +0000 UTC m=+2425.108189369" lastFinishedPulling="2026-01-29 09:19:05.09276216 +0000 UTC m=+2425.592350112" observedRunningTime="2026-01-29 09:19:05.649491996 +0000 UTC m=+2426.149079958" watchObservedRunningTime="2026-01-29 09:19:05.661725524 +0000 UTC m=+2426.161313496" Jan 29 09:19:14 crc kubenswrapper[5031]: I0129 09:19:14.282276 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:19:14 crc kubenswrapper[5031]: E0129 09:19:14.283174 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:19:14 crc kubenswrapper[5031]: I0129 09:19:14.705525 5031 generic.go:334] "Generic (PLEG): container finished" podID="5a33f933-f687-47f9-868b-02c0a633ab0f" containerID="f1a5f5ea975d46f9f36e0d53c6a356192f8505e9ba621cdd39504f99eb781c77" exitCode=0 Jan 29 09:19:14 crc kubenswrapper[5031]: I0129 09:19:14.705568 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" event={"ID":"5a33f933-f687-47f9-868b-02c0a633ab0f","Type":"ContainerDied","Data":"f1a5f5ea975d46f9f36e0d53c6a356192f8505e9ba621cdd39504f99eb781c77"} Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.172745 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.232202 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph\") pod \"5a33f933-f687-47f9-868b-02c0a633ab0f\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.232298 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam\") pod \"5a33f933-f687-47f9-868b-02c0a633ab0f\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.232347 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory\") pod \"5a33f933-f687-47f9-868b-02c0a633ab0f\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.232397 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbt77\" (UniqueName: \"kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77\") pod \"5a33f933-f687-47f9-868b-02c0a633ab0f\" (UID: \"5a33f933-f687-47f9-868b-02c0a633ab0f\") " Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.237863 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph" (OuterVolumeSpecName: "ceph") pod "5a33f933-f687-47f9-868b-02c0a633ab0f" (UID: "5a33f933-f687-47f9-868b-02c0a633ab0f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.240131 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77" (OuterVolumeSpecName: "kube-api-access-xbt77") pod "5a33f933-f687-47f9-868b-02c0a633ab0f" (UID: "5a33f933-f687-47f9-868b-02c0a633ab0f"). InnerVolumeSpecName "kube-api-access-xbt77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.260412 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory" (OuterVolumeSpecName: "inventory") pod "5a33f933-f687-47f9-868b-02c0a633ab0f" (UID: "5a33f933-f687-47f9-868b-02c0a633ab0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.262554 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a33f933-f687-47f9-868b-02c0a633ab0f" (UID: "5a33f933-f687-47f9-868b-02c0a633ab0f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.335029 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbt77\" (UniqueName: \"kubernetes.io/projected/5a33f933-f687-47f9-868b-02c0a633ab0f-kube-api-access-xbt77\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.335284 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.335342 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.335470 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a33f933-f687-47f9-868b-02c0a633ab0f-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.722643 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" event={"ID":"5a33f933-f687-47f9-868b-02c0a633ab0f","Type":"ContainerDied","Data":"24f5fe6960dd3e3d753aa28c2de9a3dc5671ad61418fe85900c0964fbf0ad391"} Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.722685 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f5fe6960dd3e3d753aa28c2de9a3dc5671ad61418fe85900c0964fbf0ad391" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.722728 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.797507 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"] Jan 29 09:19:16 crc kubenswrapper[5031]: E0129 09:19:16.798081 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a33f933-f687-47f9-868b-02c0a633ab0f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.798105 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a33f933-f687-47f9-868b-02c0a633ab0f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.798416 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a33f933-f687-47f9-868b-02c0a633ab0f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.799181 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.801647 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802051 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802240 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802503 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802655 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802676 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802685 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.802746 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.806078 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"] Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948245 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948323 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948402 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948455 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j47q\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948490 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948543 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948708 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948770 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948856 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.948932 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.949094 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" 
Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.949162 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:16 crc kubenswrapper[5031]: I0129 09:19:16.949208 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052334 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052464 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052494 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052535 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052565 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052621 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052648 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052668 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052735 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052768 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052789 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052839 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j47q\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.052865 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.057555 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.057958 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.058122 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.059676 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.060448 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.060676 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.061485 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.062020 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.071490 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.072326 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.072817 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.075707 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j47q\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.084040 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.116551 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.638201 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"]
Jan 29 09:19:17 crc kubenswrapper[5031]: I0129 09:19:17.732880 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" event={"ID":"49194734-e76b-4b96-bf9c-a4a73782e04b","Type":"ContainerStarted","Data":"5d402c3fdefbb199cd05d1e987fb61123582de24f27dcd9dcac3b4b8b7e19b4c"}
Jan 29 09:19:18 crc kubenswrapper[5031]: I0129 09:19:18.743589 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" event={"ID":"49194734-e76b-4b96-bf9c-a4a73782e04b","Type":"ContainerStarted","Data":"e14211953a9d995fa1fb442f575b60655cfbeb7373d45f50cad443933dfdda27"}
Jan 29 09:19:18 crc kubenswrapper[5031]: I0129 09:19:18.777640 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" podStartSLOduration=2.393972175 podStartE2EDuration="2.777617004s" podCreationTimestamp="2026-01-29 09:19:16 +0000 UTC" firstStartedPulling="2026-01-29 09:19:17.637735683 +0000 UTC m=+2438.137323635" lastFinishedPulling="2026-01-29 09:19:18.021380512 +0000 UTC m=+2438.520968464" observedRunningTime="2026-01-29 09:19:18.770862132 +0000 UTC m=+2439.270450094" watchObservedRunningTime="2026-01-29 09:19:18.777617004 +0000 UTC m=+2439.277204956"
Jan 29 09:19:26 crc kubenswrapper[5031]: I0129 09:19:26.282905 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d"
Jan 29 09:19:26 crc kubenswrapper[5031]: E0129 09:19:26.283841 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:19:40 crc kubenswrapper[5031]: I0129 09:19:40.299987 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d"
Jan 29 09:19:40 crc kubenswrapper[5031]: E0129 09:19:40.301297 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:19:48 crc kubenswrapper[5031]: I0129 09:19:48.986408 5031 generic.go:334] "Generic (PLEG): container finished" podID="49194734-e76b-4b96-bf9c-a4a73782e04b" containerID="e14211953a9d995fa1fb442f575b60655cfbeb7373d45f50cad443933dfdda27" exitCode=0
Jan 29 09:19:48 crc kubenswrapper[5031]: I0129 09:19:48.986749 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" event={"ID":"49194734-e76b-4b96-bf9c-a4a73782e04b","Type":"ContainerDied","Data":"e14211953a9d995fa1fb442f575b60655cfbeb7373d45f50cad443933dfdda27"}
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.417008 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541207 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541341 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541458 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j47q\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541512 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541551 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541574 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541618 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541690 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541709 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541729 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541751 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541769 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.541791 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"49194734-e76b-4b96-bf9c-a4a73782e04b\" (UID: \"49194734-e76b-4b96-bf9c-a4a73782e04b\") "
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.548508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q" (OuterVolumeSpecName: "kube-api-access-8j47q") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "kube-api-access-8j47q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.548640 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.549107 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.550730 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph" (OuterVolumeSpecName: "ceph") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.550781 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.550997 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.551544 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.552420 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.556585 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.556636 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.564595 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.580618 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory" (OuterVolumeSpecName: "inventory") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.582263 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49194734-e76b-4b96-bf9c-a4a73782e04b" (UID: "49194734-e76b-4b96-bf9c-a4a73782e04b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644824 5031 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644873 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644888 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644906 5031 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644918 5031 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644928 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j47q\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-kube-api-access-8j47q\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644938 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644948 5031 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644958 5031 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644969 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/49194734-e76b-4b96-bf9c-a4a73782e04b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644984 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ceph\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.644996 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:50 crc kubenswrapper[5031]: I0129 09:19:50.645008 5031 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49194734-e76b-4b96-bf9c-a4a73782e04b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.005940 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4" event={"ID":"49194734-e76b-4b96-bf9c-a4a73782e04b","Type":"ContainerDied","Data":"5d402c3fdefbb199cd05d1e987fb61123582de24f27dcd9dcac3b4b8b7e19b4c"}
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.005989 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d402c3fdefbb199cd05d1e987fb61123582de24f27dcd9dcac3b4b8b7e19b4c"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.005992 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.110458 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"]
Jan 29 09:19:51 crc kubenswrapper[5031]: E0129 09:19:51.110917 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49194734-e76b-4b96-bf9c-a4a73782e04b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.110945 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="49194734-e76b-4b96-bf9c-a4a73782e04b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.111172 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="49194734-e76b-4b96-bf9c-a4a73782e04b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.111958 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.115251 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.116997 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.117928 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.118255 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.120006 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.127550 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"]
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.256914 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.256976 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.257153 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.257278 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wsn7\" (UniqueName: \"kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.359203 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"
Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.359298 5031 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7wsn7\" (UniqueName: \"kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.359458 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.359490 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.363357 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.363724 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.364797 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.377289 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wsn7\" (UniqueName: \"kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.426781 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.956314 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:19:51 crc kubenswrapper[5031]: I0129 09:19:51.958451 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld"] Jan 29 09:19:52 crc kubenswrapper[5031]: I0129 09:19:52.014157 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" event={"ID":"95c8c7b7-5003-4dae-b405-74dc2263762c","Type":"ContainerStarted","Data":"bf042be760f69a4edd15070493da3c8eb0d9bf3340aa823e3dbe407de1c2bf01"} Jan 29 09:19:53 crc kubenswrapper[5031]: I0129 09:19:53.022881 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" event={"ID":"95c8c7b7-5003-4dae-b405-74dc2263762c","Type":"ContainerStarted","Data":"9dbe3fb64699348afe6a15eeae9a5d19bce51483cba30a8a04fde4534365c38e"} Jan 29 09:19:55 crc kubenswrapper[5031]: I0129 09:19:55.282211 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:19:55 crc kubenswrapper[5031]: E0129 09:19:55.282806 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:19:58 crc kubenswrapper[5031]: I0129 09:19:58.066566 5031 generic.go:334] "Generic (PLEG): container finished" podID="95c8c7b7-5003-4dae-b405-74dc2263762c" containerID="9dbe3fb64699348afe6a15eeae9a5d19bce51483cba30a8a04fde4534365c38e" exitCode=0 Jan 29 09:19:58 crc kubenswrapper[5031]: I0129 09:19:58.066665 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" event={"ID":"95c8c7b7-5003-4dae-b405-74dc2263762c","Type":"ContainerDied","Data":"9dbe3fb64699348afe6a15eeae9a5d19bce51483cba30a8a04fde4534365c38e"} Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.451928 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.520809 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory\") pod \"95c8c7b7-5003-4dae-b405-74dc2263762c\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.520904 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph\") pod \"95c8c7b7-5003-4dae-b405-74dc2263762c\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.521078 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam\") pod \"95c8c7b7-5003-4dae-b405-74dc2263762c\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.521165 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wsn7\" (UniqueName: \"kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7\") pod \"95c8c7b7-5003-4dae-b405-74dc2263762c\" (UID: \"95c8c7b7-5003-4dae-b405-74dc2263762c\") " Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.526955 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph" (OuterVolumeSpecName: "ceph") pod "95c8c7b7-5003-4dae-b405-74dc2263762c" (UID: "95c8c7b7-5003-4dae-b405-74dc2263762c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.533556 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7" (OuterVolumeSpecName: "kube-api-access-7wsn7") pod "95c8c7b7-5003-4dae-b405-74dc2263762c" (UID: "95c8c7b7-5003-4dae-b405-74dc2263762c"). InnerVolumeSpecName "kube-api-access-7wsn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.547813 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "95c8c7b7-5003-4dae-b405-74dc2263762c" (UID: "95c8c7b7-5003-4dae-b405-74dc2263762c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.548012 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory" (OuterVolumeSpecName: "inventory") pod "95c8c7b7-5003-4dae-b405-74dc2263762c" (UID: "95c8c7b7-5003-4dae-b405-74dc2263762c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.623705 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wsn7\" (UniqueName: \"kubernetes.io/projected/95c8c7b7-5003-4dae-b405-74dc2263762c-kube-api-access-7wsn7\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.623741 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.623752 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:59 crc kubenswrapper[5031]: I0129 09:19:59.623762 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c8c7b7-5003-4dae-b405-74dc2263762c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.086876 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" event={"ID":"95c8c7b7-5003-4dae-b405-74dc2263762c","Type":"ContainerDied","Data":"bf042be760f69a4edd15070493da3c8eb0d9bf3340aa823e3dbe407de1c2bf01"} Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.086918 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf042be760f69a4edd15070493da3c8eb0d9bf3340aa823e3dbe407de1c2bf01" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.086961 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.158494 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49"] Jan 29 09:20:00 crc kubenswrapper[5031]: E0129 09:20:00.158936 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c8c7b7-5003-4dae-b405-74dc2263762c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.158956 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c8c7b7-5003-4dae-b405-74dc2263762c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.159165 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c8c7b7-5003-4dae-b405-74dc2263762c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.159813 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.164047 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.164224 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.164404 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.164535 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.164617 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.167308 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.172202 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49"] Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.234013 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.234084 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.234177 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.234320 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8rc\" (UniqueName: \"kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.234508 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc 
kubenswrapper[5031]: I0129 09:20:00.234631 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336330 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336412 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336457 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336566 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336602 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8rc\" (UniqueName: \"kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.336648 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.337841 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.345124 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.345506 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.345727 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.346388 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.356621 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8rc\" (UniqueName: \"kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kdq49\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:00 crc kubenswrapper[5031]: I0129 09:20:00.481492 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:20:01 crc kubenswrapper[5031]: I0129 09:20:01.024868 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49"] Jan 29 09:20:01 crc kubenswrapper[5031]: I0129 09:20:01.099815 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" event={"ID":"764d97ce-43f8-4cce-9b06-61f1a548199f","Type":"ContainerStarted","Data":"49bc5d1cd3900e8e3b90ac26809e8e3db70722839ab4461730a928f36f27ea69"} Jan 29 09:20:02 crc kubenswrapper[5031]: I0129 09:20:02.109776 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" event={"ID":"764d97ce-43f8-4cce-9b06-61f1a548199f","Type":"ContainerStarted","Data":"266e4dffab757451901c5b6170a2c7cea95fc7c1651d07582789b7a791ac5605"} Jan 29 09:20:02 crc kubenswrapper[5031]: I0129 09:20:02.129528 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" podStartSLOduration=1.46733721 podStartE2EDuration="2.129506592s" podCreationTimestamp="2026-01-29 09:20:00 +0000 UTC" firstStartedPulling="2026-01-29 09:20:01.027349232 +0000 UTC m=+2481.526937184" lastFinishedPulling="2026-01-29 09:20:01.689518614 +0000 UTC m=+2482.189106566" observedRunningTime="2026-01-29 09:20:02.127672533 +0000 UTC m=+2482.627260485" watchObservedRunningTime="2026-01-29 09:20:02.129506592 +0000 UTC m=+2482.629094544" Jan 29 09:20:07 crc kubenswrapper[5031]: I0129 09:20:07.283981 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:20:07 crc kubenswrapper[5031]: E0129 09:20:07.284680 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:20:19 crc kubenswrapper[5031]: I0129 09:20:19.283042 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:20:19 crc kubenswrapper[5031]: E0129 09:20:19.283988 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:20:34 crc kubenswrapper[5031]: I0129 09:20:34.283488 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:20:34 crc kubenswrapper[5031]: E0129 09:20:34.286031 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" 
podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:20:45 crc kubenswrapper[5031]: I0129 09:20:45.281954 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:20:45 crc kubenswrapper[5031]: E0129 09:20:45.282892 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:20:58 crc kubenswrapper[5031]: I0129 09:20:58.283527 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:20:58 crc kubenswrapper[5031]: E0129 09:20:58.284354 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:21:09 crc kubenswrapper[5031]: I0129 09:21:09.282750 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:21:09 crc kubenswrapper[5031]: I0129 09:21:09.683975 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643"} Jan 29 09:21:10 crc kubenswrapper[5031]: I0129 09:21:10.693209 5031 generic.go:334] "Generic (PLEG): container finished" podID="764d97ce-43f8-4cce-9b06-61f1a548199f" containerID="266e4dffab757451901c5b6170a2c7cea95fc7c1651d07582789b7a791ac5605" exitCode=0 Jan 29 09:21:10 crc kubenswrapper[5031]: I0129 09:21:10.693300 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" event={"ID":"764d97ce-43f8-4cce-9b06-61f1a548199f","Type":"ContainerDied","Data":"266e4dffab757451901c5b6170a2c7cea95fc7c1651d07582789b7a791ac5605"} Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.203917 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.271687 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.272744 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.272843 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.272907 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.273118 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.273495 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx8rc\" (UniqueName: \"kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc\") pod \"764d97ce-43f8-4cce-9b06-61f1a548199f\" (UID: \"764d97ce-43f8-4cce-9b06-61f1a548199f\") " Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.279487 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc" (OuterVolumeSpecName: "kube-api-access-tx8rc") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "kube-api-access-tx8rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.280436 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph" (OuterVolumeSpecName: "ceph") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.300388 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.303085 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.305186 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory" (OuterVolumeSpecName: "inventory") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.307645 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "764d97ce-43f8-4cce-9b06-61f1a548199f" (UID: "764d97ce-43f8-4cce-9b06-61f1a548199f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376109 5031 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/764d97ce-43f8-4cce-9b06-61f1a548199f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376151 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376161 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376171 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx8rc\" (UniqueName: \"kubernetes.io/projected/764d97ce-43f8-4cce-9b06-61f1a548199f-kube-api-access-tx8rc\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376180 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.376188 5031 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764d97ce-43f8-4cce-9b06-61f1a548199f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.710976 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" event={"ID":"764d97ce-43f8-4cce-9b06-61f1a548199f","Type":"ContainerDied","Data":"49bc5d1cd3900e8e3b90ac26809e8e3db70722839ab4461730a928f36f27ea69"} Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.711016 5031 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="49bc5d1cd3900e8e3b90ac26809e8e3db70722839ab4461730a928f36f27ea69" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.711066 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kdq49" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.830945 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2"] Jan 29 09:21:12 crc kubenswrapper[5031]: E0129 09:21:12.831360 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764d97ce-43f8-4cce-9b06-61f1a548199f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.831390 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="764d97ce-43f8-4cce-9b06-61f1a548199f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.831587 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="764d97ce-43f8-4cce-9b06-61f1a548199f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.832160 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.835125 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.835524 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.835527 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.836146 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.839095 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.839342 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.849938 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.856952 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2"] Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.885859 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886240 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886474 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886632 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886810 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886891 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5mj\" (UniqueName: \"kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.886968 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.988914 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989056 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 
09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989092 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5mj\" (UniqueName: \"kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989115 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989152 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989193 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.989225 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.995203 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.995232 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.995258 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.995397 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.997794 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:12 crc kubenswrapper[5031]: I0129 09:21:12.998300 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:13 crc kubenswrapper[5031]: I0129 09:21:13.006184 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5mj\" (UniqueName: \"kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:13 crc kubenswrapper[5031]: I0129 09:21:13.162353 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:21:13 crc kubenswrapper[5031]: I0129 09:21:13.708062 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2"] Jan 29 09:21:13 crc kubenswrapper[5031]: I0129 09:21:13.724864 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" event={"ID":"5e820097-42d1-47ac-84d1-824842f92b8b","Type":"ContainerStarted","Data":"62df4f117f3e13b9eef259f60fe23bbdb7b01348d3655bef58484f0539d27814"} Jan 29 09:21:14 crc kubenswrapper[5031]: I0129 09:21:14.735481 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" event={"ID":"5e820097-42d1-47ac-84d1-824842f92b8b","Type":"ContainerStarted","Data":"e53fa177d39f60d7fe9855f743c2ec27c3d0721653017e9c19e73d55413372cf"} Jan 29 09:21:14 crc kubenswrapper[5031]: I0129 09:21:14.760176 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" podStartSLOduration=2.300522741 podStartE2EDuration="2.760151844s" podCreationTimestamp="2026-01-29 09:21:12 +0000 UTC" firstStartedPulling="2026-01-29 09:21:13.712502207 +0000 UTC m=+2554.212090159" lastFinishedPulling="2026-01-29 09:21:14.17213131 +0000 UTC m=+2554.671719262" observedRunningTime="2026-01-29 09:21:14.751303096 +0000 UTC m=+2555.250891048" watchObservedRunningTime="2026-01-29 09:21:14.760151844 +0000 UTC m=+2555.259739796" Jan 29 09:21:43 crc kubenswrapper[5031]: E0129 09:21:43.354939 5031 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.073s" Jan 29 09:22:10 crc kubenswrapper[5031]: I0129 09:22:10.600721 5031 generic.go:334] "Generic (PLEG): container finished" podID="5e820097-42d1-47ac-84d1-824842f92b8b" containerID="e53fa177d39f60d7fe9855f743c2ec27c3d0721653017e9c19e73d55413372cf" exitCode=0 Jan 29 09:22:10 crc kubenswrapper[5031]: I0129 09:22:10.600818 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" event={"ID":"5e820097-42d1-47ac-84d1-824842f92b8b","Type":"ContainerDied","Data":"e53fa177d39f60d7fe9855f743c2ec27c3d0721653017e9c19e73d55413372cf"} Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.050190 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.210480 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.210945 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.210981 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.211898 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.211937 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.212472 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.212568 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5mj\" (UniqueName: \"kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj\") pod \"5e820097-42d1-47ac-84d1-824842f92b8b\" (UID: \"5e820097-42d1-47ac-84d1-824842f92b8b\") " Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.218135 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj" (OuterVolumeSpecName: "kube-api-access-mv5mj") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "kube-api-access-mv5mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.218939 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph" (OuterVolumeSpecName: "ceph") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.222039 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.241839 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.248027 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory" (OuterVolumeSpecName: "inventory") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.250647 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.250804 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5e820097-42d1-47ac-84d1-824842f92b8b" (UID: "5e820097-42d1-47ac-84d1-824842f92b8b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316332 5031 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316389 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316401 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316412 5031 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316424 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316466 5031 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e820097-42d1-47ac-84d1-824842f92b8b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.316482 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5mj\" (UniqueName: \"kubernetes.io/projected/5e820097-42d1-47ac-84d1-824842f92b8b-kube-api-access-mv5mj\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.620305 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" event={"ID":"5e820097-42d1-47ac-84d1-824842f92b8b","Type":"ContainerDied","Data":"62df4f117f3e13b9eef259f60fe23bbdb7b01348d3655bef58484f0539d27814"} Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.620360 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62df4f117f3e13b9eef259f60fe23bbdb7b01348d3655bef58484f0539d27814" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.620763 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.819575 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526"] Jan 29 09:22:12 crc kubenswrapper[5031]: E0129 09:22:12.820026 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e820097-42d1-47ac-84d1-824842f92b8b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.820047 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e820097-42d1-47ac-84d1-824842f92b8b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.820231 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e820097-42d1-47ac-84d1-824842f92b8b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.820929 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.824120 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.824508 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.824603 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.825498 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.826006 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.827813 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.842824 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526"] Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927070 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927134 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927181 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927224 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927297 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:12 crc kubenswrapper[5031]: I0129 09:22:12.927533 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qkv\" (UniqueName: \"kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.029977 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qkv\" (UniqueName: \"kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.030089 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.030117 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.030154 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.030194 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.030243 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.037036 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.037351 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.043032 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.043121 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.046399 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.048128 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qkv\" (UniqueName: \"kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7z526\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.140125 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" Jan 29 09:22:13 crc kubenswrapper[5031]: I0129 09:22:13.693244 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526"] Jan 29 09:22:14 crc kubenswrapper[5031]: I0129 09:22:14.643403 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" event={"ID":"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc","Type":"ContainerStarted","Data":"a5e5b87d20e8db2fce156c727bafa3dc94d024c1202f238c8bc8efb6d0345efd"} Jan 29 09:22:14 crc kubenswrapper[5031]: I0129 09:22:14.643685 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" event={"ID":"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc","Type":"ContainerStarted","Data":"2bfb2d24e2ec5051096f70c8fe35bc7365a6c09926ca7b88110a0eed1162e26f"} Jan 29 09:22:14 crc kubenswrapper[5031]: I0129 09:22:14.673073 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" podStartSLOduration=2.115797401 podStartE2EDuration="2.67304912s" podCreationTimestamp="2026-01-29 09:22:12 +0000 UTC" firstStartedPulling="2026-01-29 09:22:13.705203801 +0000 UTC m=+2614.204791753" lastFinishedPulling="2026-01-29 09:22:14.26245552 +0000 UTC m=+2614.762043472" observedRunningTime="2026-01-29 09:22:14.662820336 +0000 UTC m=+2615.162408288" watchObservedRunningTime="2026-01-29 09:22:14.67304912 +0000 UTC m=+2615.172637072" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.089316 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.092496 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.100751 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.187650 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.187713 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwpl\" (UniqueName: \"kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.187791 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.288743 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.289426 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwpl\" (UniqueName: \"kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.289576 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.289262 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.290707 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.320743 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wtwpl\" (UniqueName: \"kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl\") pod \"redhat-operators-qrm5b\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.415355 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:30 crc kubenswrapper[5031]: I0129 09:22:30.885265 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:22:31 crc kubenswrapper[5031]: I0129 09:22:31.784833 5031 generic.go:334] "Generic (PLEG): container finished" podID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerID="a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768" exitCode=0 Jan 29 09:22:31 crc kubenswrapper[5031]: I0129 09:22:31.784889 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerDied","Data":"a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768"} Jan 29 09:22:31 crc kubenswrapper[5031]: I0129 09:22:31.785093 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerStarted","Data":"5a34eb35f5475707b9a8c996441a87b3ed1f628ed2aca27df9a79361d235a9e1"} Jan 29 09:22:32 crc kubenswrapper[5031]: I0129 09:22:32.794615 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerStarted","Data":"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4"} Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.752139 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.755563 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.774983 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.841691 5031 generic.go:334] "Generic (PLEG): container finished" podID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerID="b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4" exitCode=0 Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.841750 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerDied","Data":"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4"} Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.907121 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.907306 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:35 crc kubenswrapper[5031]: I0129 09:22:35.907331 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fn9\" (UniqueName: \"kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.009578 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.009645 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fn9\" (UniqueName: \"kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.009712 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.010111 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content\") pod \"redhat-marketplace-lcbqp\" (UID: 
\"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.010428 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.031291 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fn9\" (UniqueName: \"kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9\") pod \"redhat-marketplace-lcbqp\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.079172 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.577058 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:36 crc kubenswrapper[5031]: W0129 09:22:36.579615 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47da0b51_b027_4bf6_a78f_4e41228a85ed.slice/crio-51001c963bf0cc11ea4f372209b6f1939fc4736549cd130fd6f8de473612b0ce WatchSource:0}: Error finding container 51001c963bf0cc11ea4f372209b6f1939fc4736549cd130fd6f8de473612b0ce: Status 404 returned error can't find the container with id 51001c963bf0cc11ea4f372209b6f1939fc4736549cd130fd6f8de473612b0ce Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.858761 5031 generic.go:334] "Generic (PLEG): container finished" podID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerID="ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81" exitCode=0 Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.858888 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerDied","Data":"ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81"} Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.858958 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerStarted","Data":"51001c963bf0cc11ea4f372209b6f1939fc4736549cd130fd6f8de473612b0ce"} Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.865436 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerStarted","Data":"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240"} Jan 29 09:22:36 crc kubenswrapper[5031]: I0129 09:22:36.908545 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrm5b" podStartSLOduration=2.348988039 podStartE2EDuration="6.908526052s" podCreationTimestamp="2026-01-29 09:22:30 +0000 UTC" firstStartedPulling="2026-01-29 09:22:31.78654053 +0000 UTC m=+2632.286128492" lastFinishedPulling="2026-01-29 09:22:36.346078553 +0000 UTC m=+2636.845666505" observedRunningTime="2026-01-29 09:22:36.901492544 +0000 UTC m=+2637.401080506" 
watchObservedRunningTime="2026-01-29 09:22:36.908526052 +0000 UTC m=+2637.408113994" Jan 29 09:22:37 crc kubenswrapper[5031]: I0129 09:22:37.878630 5031 generic.go:334] "Generic (PLEG): container finished" podID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerID="c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d" exitCode=0 Jan 29 09:22:37 crc kubenswrapper[5031]: I0129 09:22:37.878699 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerDied","Data":"c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d"} Jan 29 09:22:38 crc kubenswrapper[5031]: I0129 09:22:38.887998 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerStarted","Data":"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb"} Jan 29 09:22:38 crc kubenswrapper[5031]: I0129 09:22:38.920968 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcbqp" podStartSLOduration=2.393866584 podStartE2EDuration="3.920949926s" podCreationTimestamp="2026-01-29 09:22:35 +0000 UTC" firstStartedPulling="2026-01-29 09:22:36.861401409 +0000 UTC m=+2637.360989361" lastFinishedPulling="2026-01-29 09:22:38.388484761 +0000 UTC m=+2638.888072703" observedRunningTime="2026-01-29 09:22:38.908954946 +0000 UTC m=+2639.408542908" watchObservedRunningTime="2026-01-29 09:22:38.920949926 +0000 UTC m=+2639.420537878" Jan 29 09:22:40 crc kubenswrapper[5031]: I0129 09:22:40.416077 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:40 crc kubenswrapper[5031]: I0129 09:22:40.416491 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:22:41 crc kubenswrapper[5031]: I0129 09:22:41.458541 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrm5b" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server" probeResult="failure" output=< Jan 29 09:22:41 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 09:22:41 crc kubenswrapper[5031]: > Jan 29 09:22:46 crc kubenswrapper[5031]: I0129 09:22:46.080654 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:46 crc kubenswrapper[5031]: I0129 09:22:46.081233 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:46 crc kubenswrapper[5031]: I0129 09:22:46.125864 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:46 crc kubenswrapper[5031]: I0129 09:22:46.276845 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:46 crc kubenswrapper[5031]: I0129 09:22:46.362223 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.247457 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcbqp" 
podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="registry-server" containerID="cri-o://e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb" gracePeriod=2 Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.660336 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.696292 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content\") pod \"47da0b51-b027-4bf6-a78f-4e41228a85ed\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.696415 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8fn9\" (UniqueName: \"kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9\") pod \"47da0b51-b027-4bf6-a78f-4e41228a85ed\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.696558 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities\") pod \"47da0b51-b027-4bf6-a78f-4e41228a85ed\" (UID: \"47da0b51-b027-4bf6-a78f-4e41228a85ed\") " Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.697589 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities" (OuterVolumeSpecName: "utilities") pod "47da0b51-b027-4bf6-a78f-4e41228a85ed" (UID: "47da0b51-b027-4bf6-a78f-4e41228a85ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.702269 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9" (OuterVolumeSpecName: "kube-api-access-t8fn9") pod "47da0b51-b027-4bf6-a78f-4e41228a85ed" (UID: "47da0b51-b027-4bf6-a78f-4e41228a85ed"). InnerVolumeSpecName "kube-api-access-t8fn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.715824 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47da0b51-b027-4bf6-a78f-4e41228a85ed" (UID: "47da0b51-b027-4bf6-a78f-4e41228a85ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.798304 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.798350 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8fn9\" (UniqueName: \"kubernetes.io/projected/47da0b51-b027-4bf6-a78f-4e41228a85ed-kube-api-access-t8fn9\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:48 crc kubenswrapper[5031]: I0129 09:22:48.798371 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47da0b51-b027-4bf6-a78f-4e41228a85ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.257844 5031 generic.go:334] "Generic (PLEG): container finished" podID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerID="e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb" exitCode=0 Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.257898 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerDied","Data":"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb"} Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.257974 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcbqp" event={"ID":"47da0b51-b027-4bf6-a78f-4e41228a85ed","Type":"ContainerDied","Data":"51001c963bf0cc11ea4f372209b6f1939fc4736549cd130fd6f8de473612b0ce"} Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.257989 5031 scope.go:117] "RemoveContainer" containerID="e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.258963 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcbqp" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.279385 5031 scope.go:117] "RemoveContainer" containerID="c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.294610 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.304888 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcbqp"] Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.315268 5031 scope.go:117] "RemoveContainer" containerID="ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.355194 5031 scope.go:117] "RemoveContainer" containerID="e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb" Jan 29 09:22:49 crc kubenswrapper[5031]: E0129 09:22:49.355988 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb\": container with ID starting with e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb not found: ID does not exist" containerID="e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.356022 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb"} err="failed to get container status \"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb\": rpc error: code = NotFound desc = could not find container \"e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb\": container with ID starting with e1cd7608fc2c57d3920da99ac40ddd6486452b9ed52e7d797f444bb361157cdb not found: ID does not exist" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.356047 5031 scope.go:117] "RemoveContainer" containerID="c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d" Jan 29 09:22:49 crc kubenswrapper[5031]: E0129 09:22:49.356491 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d\": container with ID starting with c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d not found: ID does not exist" containerID="c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.356542 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d"} err="failed to get container status \"c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d\": rpc error: code = NotFound desc = could not find container \"c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d\": container with ID starting with c9b276e4b000cd72c49bdde88ecaed4da3b476ea59a855637102819c71475b1d not found: ID does not exist" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.356571 5031 scope.go:117] "RemoveContainer" containerID="ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81" Jan 29 09:22:49 crc kubenswrapper[5031]: E0129 09:22:49.357091 5031 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81\": container with ID starting with ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81 not found: ID does not exist" containerID="ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81" Jan 29 09:22:49 crc kubenswrapper[5031]: I0129 09:22:49.357165 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81"} err="failed to get container status \"ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81\": rpc error: code = NotFound desc = could not find container \"ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81\": container with ID starting with ac4e7c482385d3a112a0422720a5faf30eaafd789c3abfac3bbc48583f21bb81 not found: ID does not exist" Jan 29 09:22:50 crc kubenswrapper[5031]: I0129 09:22:50.294292 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" path="/var/lib/kubelet/pods/47da0b51-b027-4bf6-a78f-4e41228a85ed/volumes" Jan 29 09:22:51 crc kubenswrapper[5031]: I0129 09:22:51.461494 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrm5b" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server" probeResult="failure" output=< Jan 29 09:22:51 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 09:22:51 crc kubenswrapper[5031]: > Jan 29 09:23:00 crc kubenswrapper[5031]: I0129 09:23:00.460483 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:23:00 crc kubenswrapper[5031]: I0129 09:23:00.508211 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:23:01 crc kubenswrapper[5031]: I0129 09:23:01.292111 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.442753 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrm5b" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server" containerID="cri-o://0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240" gracePeriod=2 Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.889823 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.984276 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtwpl\" (UniqueName: \"kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl\") pod \"4496f045-3bd7-442d-8e81-27e272a6b14d\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.984405 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities\") pod \"4496f045-3bd7-442d-8e81-27e272a6b14d\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.984475 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content\") pod \"4496f045-3bd7-442d-8e81-27e272a6b14d\" (UID: \"4496f045-3bd7-442d-8e81-27e272a6b14d\") " Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.984876 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities" (OuterVolumeSpecName: "utilities") pod "4496f045-3bd7-442d-8e81-27e272a6b14d" (UID: "4496f045-3bd7-442d-8e81-27e272a6b14d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.985162 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:02 crc kubenswrapper[5031]: I0129 09:23:02.989911 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl" (OuterVolumeSpecName: "kube-api-access-wtwpl") pod "4496f045-3bd7-442d-8e81-27e272a6b14d" (UID: "4496f045-3bd7-442d-8e81-27e272a6b14d"). InnerVolumeSpecName "kube-api-access-wtwpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.085945 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtwpl\" (UniqueName: \"kubernetes.io/projected/4496f045-3bd7-442d-8e81-27e272a6b14d-kube-api-access-wtwpl\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.103595 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4496f045-3bd7-442d-8e81-27e272a6b14d" (UID: "4496f045-3bd7-442d-8e81-27e272a6b14d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.187284 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4496f045-3bd7-442d-8e81-27e272a6b14d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.478610 5031 generic.go:334] "Generic (PLEG): container finished" podID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerID="0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240" exitCode=0 Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.478688 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrm5b" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.478713 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerDied","Data":"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240"} Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.481422 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrm5b" event={"ID":"4496f045-3bd7-442d-8e81-27e272a6b14d","Type":"ContainerDied","Data":"5a34eb35f5475707b9a8c996441a87b3ed1f628ed2aca27df9a79361d235a9e1"} Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.481447 5031 scope.go:117] "RemoveContainer" containerID="0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.518847 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.526662 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrm5b"] Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.531671 5031 scope.go:117] "RemoveContainer" containerID="b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.562256 5031 scope.go:117] "RemoveContainer" containerID="a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.591095 5031 scope.go:117] "RemoveContainer" containerID="0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240" Jan 29 09:23:03 crc kubenswrapper[5031]: E0129 09:23:03.592471 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240\": container with ID starting with 0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240 not found: ID does not exist" containerID="0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.592504 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240"} err="failed to get container status \"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240\": rpc error: code = NotFound desc = could not find container \"0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240\": container with ID starting with 0f36b11f41dbfaec2623a4efcaf5e3612a6839ae136f01022a7ea265ebbbc240 not found: ID does not exist" Jan 29 09:23:03 crc 
kubenswrapper[5031]: I0129 09:23:03.592528 5031 scope.go:117] "RemoveContainer" containerID="b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4" Jan 29 09:23:03 crc kubenswrapper[5031]: E0129 09:23:03.592872 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4\": container with ID starting with b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4 not found: ID does not exist" containerID="b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.592898 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4"} err="failed to get container status \"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4\": rpc error: code = NotFound desc = could not find container \"b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4\": container with ID starting with b8ad7d4447afd776c8fb7b0dafeaf703efd55fa4be2bb2787c2790015c5ebda4 not found: ID does not exist" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.592917 5031 scope.go:117] "RemoveContainer" containerID="a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768" Jan 29 09:23:03 crc kubenswrapper[5031]: E0129 09:23:03.593177 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768\": container with ID starting with a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768 not found: ID does not exist" containerID="a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768" Jan 29 09:23:03 crc kubenswrapper[5031]: I0129 09:23:03.593207 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768"} err="failed to get container status \"a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768\": rpc error: code = NotFound desc = could not find container \"a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768\": container with ID starting with a044267020e7817a24b6a5268497dc6e3f90f6a532fd5dd8268c79ab51822768 not found: ID does not exist" Jan 29 09:23:04 crc kubenswrapper[5031]: I0129 09:23:04.296105 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" path="/var/lib/kubelet/pods/4496f045-3bd7-442d-8e81-27e272a6b14d/volumes" Jan 29 09:23:38 crc kubenswrapper[5031]: I0129 09:23:38.493672 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:23:38 crc kubenswrapper[5031]: I0129 09:23:38.494748 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:24:08 crc kubenswrapper[5031]: I0129 09:24:08.493485 5031 patch_prober.go:28] interesting 
pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:24:08 crc kubenswrapper[5031]: I0129 09:24:08.493997 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:24:38 crc kubenswrapper[5031]: I0129 09:24:38.493979 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:24:38 crc kubenswrapper[5031]: I0129 09:24:38.494576 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:24:38 crc kubenswrapper[5031]: I0129 09:24:38.494618 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:24:38 crc kubenswrapper[5031]: I0129 09:24:38.495317 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:24:38 crc kubenswrapper[5031]: I0129 09:24:38.495417 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643" gracePeriod=600 Jan 29 09:24:39 crc kubenswrapper[5031]: I0129 09:24:39.337361 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643" exitCode=0 Jan 29 09:24:39 crc kubenswrapper[5031]: I0129 09:24:39.337422 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643"} Jan 29 09:24:39 crc kubenswrapper[5031]: I0129 09:24:39.337994 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"} Jan 29 09:24:39 crc kubenswrapper[5031]: I0129 09:24:39.338015 5031 scope.go:117] "RemoveContainer" containerID="09b59c4f9b723518c8d995803be81d4f449abe4d02e3b41db2384c3ce9c8fe3d" Jan 29 09:26:08 
Jan 29 09:26:08 crc kubenswrapper[5031]: I0129 09:26:08.066744 5031 generic.go:334] "Generic (PLEG): container finished" podID="4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" containerID="a5e5b87d20e8db2fce156c727bafa3dc94d024c1202f238c8bc8efb6d0345efd" exitCode=0
Jan 29 09:26:08 crc kubenswrapper[5031]: I0129 09:26:08.066857 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" event={"ID":"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc","Type":"ContainerDied","Data":"a5e5b87d20e8db2fce156c727bafa3dc94d024c1202f238c8bc8efb6d0345efd"}
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.537135 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526"
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623578 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623779 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623816 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623846 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623918 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.623964 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6qkv\" (UniqueName: \"kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv\") pod \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\" (UID: \"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc\") "
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.629866 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv" (OuterVolumeSpecName: "kube-api-access-r6qkv") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "kube-api-access-r6qkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.633835 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph" (OuterVolumeSpecName: "ceph") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.633858 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.648919 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory" (OuterVolumeSpecName: "inventory") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.650910 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.651408 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" (UID: "4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727132 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727167 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727178 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-ceph\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727189 5031 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727198 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6qkv\" (UniqueName: \"kubernetes.io/projected/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-kube-api-access-r6qkv\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:09 crc kubenswrapper[5031]: I0129 09:26:09.727206 5031 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.085641 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526" event={"ID":"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc","Type":"ContainerDied","Data":"2bfb2d24e2ec5051096f70c8fe35bc7365a6c09926ca7b88110a0eed1162e26f"}
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.085686 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfb2d24e2ec5051096f70c8fe35bc7365a6c09926ca7b88110a0eed1162e26f"
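The sequence above is the kubelet volume manager tearing down the finished libvirt-edpm job pod in its usual three steps: UnmountVolume starts for each volume still mounted, the per-plugin TearDown succeeds, and each volume is reported detached from node crc. A toy model of the reconcile decision that drives step one, assuming nothing about the kubelet's real types:

package main

import "fmt"

// volumesToUnmount is the shape of the reconciler decision visible above:
// anything still mounted (actual state) but no longer wanted (desired
// state, empty here because the pod was deleted) gets unmounted.
func volumesToUnmount(desired, actual map[string]bool) []string {
	var stale []string
	for vol := range actual {
		if !desired[vol] {
			stale = append(stale, vol)
		}
	}
	return stale
}

func main() {
	desired := map[string]bool{} // pod deleted: nothing is desired
	actual := map[string]bool{
		"libvirt-secret-0":            true,
		"inventory":                   true,
		"ssh-key-openstack-edpm-ipam": true,
		"ceph":                        true,
		"libvirt-combined-ca-bundle":  true,
		"kube-api-access-r6qkv":       true,
	}
	for _, vol := range volumesToUnmount(desired, actual) {
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
	}
}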
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.085699 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7z526"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.190861 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"]
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191349 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191403 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191435 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="extract-content"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191447 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="extract-content"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191473 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191486 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191514 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="extract-utilities"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191527 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="extract-utilities"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191611 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="extract-utilities"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191641 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="extract-utilities"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191679 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191713 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:26:10 crc kubenswrapper[5031]: E0129 09:26:10.191738 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="extract-content"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.191744 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="extract-content"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.192090 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
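The cpu_manager and memory_manager entries fire during admission of the new nova-custom-ceph job pod: before reserving resources, both managers drop bookkeeping left over from containers that no longer exist (the finished libvirt job and two old catalog pods). A sketch of that cleanup pattern with hypothetical types; the real managers also persist checkpoints, which this ignores:

package main

import "fmt"

// assignments maps podUID -> containerName -> an opaque resource
// assignment (a cpuset string here, purely for illustration).
type assignments map[string]map[string]string

// removeStaleState drops state for any pod that is no longer active,
// mirroring the "RemoveStaleState: removing container" entries above.
func removeStaleState(state assignments, active map[string]bool) {
	for podUID, containers := range state {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc": {"libvirt-edpm-deployment-openstack-edpm-ipam": "0-3"},
		"05fc07ec-828a-468d-be87-1fe3925dfb0c": {"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam": "0-3"},
	}
	active := map[string]bool{"05fc07ec-828a-468d-be87-1fe3925dfb0c": true}
	removeStaleState(state, active)
}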
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.192104 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="47da0b51-b027-4bf6-a78f-4e41228a85ed" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.192119 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4496f045-3bd7-442d-8e81-27e272a6b14d" containerName="registry-server"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.192781 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.194891 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.194957 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j8gr"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.201475 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.201736 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.201742 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.201914 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.202101 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.202124 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.203258 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.214450 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"]
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236746 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236770 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236829 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236867 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236884 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdzd\" (UniqueName: \"kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236920 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236935 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236965 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.236998 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338708 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338779 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338803 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338842 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338868 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338915 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338933 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdzd\" (UniqueName: \"kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338981 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.338999 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.339073 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.339119 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.341410 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.343008 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.343604 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.344204 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.344727 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.347152 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.351932 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.356042 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.359177 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdzd\" (UniqueName: \"kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.360791 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.367985 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:10 crc kubenswrapper[5031]: I0129 09:26:10.514493 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:26:11 crc kubenswrapper[5031]: I0129 09:26:11.041465 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 09:26:11 crc kubenswrapper[5031]: I0129 09:26:11.049603 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"]
Jan 29 09:26:11 crc kubenswrapper[5031]: I0129 09:26:11.092923 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" event={"ID":"05fc07ec-828a-468d-be87-1fe3925dfb0c","Type":"ContainerStarted","Data":"4bd4f37616e1b75c2f74acf6c0aa30a9b1f906830b5f081f404394c20aee8a4b"}
Jan 29 09:26:12 crc kubenswrapper[5031]: I0129 09:26:12.100766 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" event={"ID":"05fc07ec-828a-468d-be87-1fe3925dfb0c","Type":"ContainerStarted","Data":"323b493c4f6d97b42d9ad074e0b65bfd7f09a7fa18f28176f8c0659d64233c59"}
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.659702 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" podStartSLOduration=12.202194081 podStartE2EDuration="12.659683928s" podCreationTimestamp="2026-01-29 09:26:10 +0000 UTC" firstStartedPulling="2026-01-29 09:26:11.041201434 +0000 UTC m=+2851.540789386" lastFinishedPulling="2026-01-29 09:26:11.498691281 +0000 UTC m=+2851.998279233" observedRunningTime="2026-01-29 09:26:12.124211371 +0000 UTC m=+2852.623799343" watchObservedRunningTime="2026-01-29 09:26:22.659683928 +0000 UTC m=+2863.159271880"
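A note on the pod_startup_latency_tracker entry above: podStartE2EDuration (12.659683928s) is exactly watchObservedRunningTime (09:26:22.659683928) minus podCreationTimestamp (09:26:10), and podStartSLOduration (12.202194081) differs from it by exactly the image-pull window, lastFinishedPulling minus firstStartedPulling. The monotonic m=+ offsets in the entry make this easy to check; a small Go verification (the constants are copied from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Monotonic offsets (m=+...) and the E2E duration, copied from the log entry.
	firstStartedPulling := 2851540789386 * time.Nanosecond // m=+2851.540789386
	lastFinishedPulling := 2851998279233 * time.Nanosecond // m=+2851.998279233
	e2e := 12659683928 * time.Nanosecond                   // podStartE2EDuration="12.659683928s"

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Println("image pull window:", pull)     // 457.489847ms
	fmt.Println("E2E minus pull:", e2e-pull)    // 12.202194081s == podStartSLOduration
}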
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.663569 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.665468 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.677294 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.773553 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.773647 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcbx\" (UniqueName: \"kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.773726 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.875505 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.875592 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.875659 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grcbx\" (UniqueName: \"kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.876202 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.876314 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.896637 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grcbx\" (UniqueName: \"kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx\") pod \"certified-operators-bvkvq\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") " pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:22 crc kubenswrapper[5031]: I0129 09:26:22.983597 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.277026 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tcmdn"]
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.279646 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.297613 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tcmdn"]
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.387558 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.387599 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvnl\" (UniqueName: \"kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.387686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-catalog-content\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.490426 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-catalog-content\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.490729 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.490755 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swvnl\" (UniqueName: \"kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.491135 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-catalog-content\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.491223 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.509805 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swvnl\" (UniqueName: \"kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl\") pod \"community-operators-tcmdn\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") " pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.610860 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:23 crc kubenswrapper[5031]: I0129 09:26:23.668928 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:23 crc kubenswrapper[5031]: W0129 09:26:23.675251 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5b2e693_eaad_4254_99c8_d8f8594e2b2e.slice/crio-cdfb3e9fe9e6c95cb2b88717a0a803676d1e67cec36e3ace5bf5d88d505ef1b7 WatchSource:0}: Error finding container cdfb3e9fe9e6c95cb2b88717a0a803676d1e67cec36e3ace5bf5d88d505ef1b7: Status 404 returned error can't find the container with id cdfb3e9fe9e6c95cb2b88717a0a803676d1e67cec36e3ace5bf5d88d505ef1b7
Jan 29 09:26:24 crc kubenswrapper[5031]: I0129 09:26:24.163538 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tcmdn"]
Jan 29 09:26:24 crc kubenswrapper[5031]: I0129 09:26:24.187800 5031 generic.go:334] "Generic (PLEG): container finished" podID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerID="b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82" exitCode=0
Jan 29 09:26:24 crc kubenswrapper[5031]: I0129 09:26:24.187883 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerDied","Data":"b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82"}
Jan 29 09:26:24 crc kubenswrapper[5031]: I0129 09:26:24.188161 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerStarted","Data":"cdfb3e9fe9e6c95cb2b88717a0a803676d1e67cec36e3ace5bf5d88d505ef1b7"}
Jan 29 09:26:24 crc kubenswrapper[5031]: I0129 09:26:24.191267 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerStarted","Data":"1a51460ef742466f70d1cc902f9b7d699c09f50e1604da49561924b3dd739cf9"}
Jan 29 09:26:25 crc kubenswrapper[5031]: I0129 09:26:25.201488 5031 generic.go:334] "Generic (PLEG): container finished" podID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerID="4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3" exitCode=0
Jan 29 09:26:25 crc kubenswrapper[5031]: I0129 09:26:25.201579 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerDied","Data":"4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3"}
Jan 29 09:26:25 crc kubenswrapper[5031]: I0129 09:26:25.208086 5031 generic.go:334] "Generic (PLEG): container finished" podID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerID="17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039" exitCode=0
Jan 29 09:26:25 crc kubenswrapper[5031]: I0129 09:26:25.208135 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerDied","Data":"17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039"}
Jan 29 09:26:26 crc kubenswrapper[5031]: I0129 09:26:26.221103 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerStarted","Data":"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"}
Jan 29 09:26:26 crc kubenswrapper[5031]: I0129 09:26:26.223357 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerStarted","Data":"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"}
Jan 29 09:26:26 crc kubenswrapper[5031]: I0129 09:26:26.270161 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bvkvq" podStartSLOduration=2.856139737 podStartE2EDuration="4.270142487s" podCreationTimestamp="2026-01-29 09:26:22 +0000 UTC" firstStartedPulling="2026-01-29 09:26:24.189181288 +0000 UTC m=+2864.688769240" lastFinishedPulling="2026-01-29 09:26:25.603184038 +0000 UTC m=+2866.102771990" observedRunningTime="2026-01-29 09:26:26.264910186 +0000 UTC m=+2866.764498128" watchObservedRunningTime="2026-01-29 09:26:26.270142487 +0000 UTC m=+2866.769730439"
Jan 29 09:26:27 crc kubenswrapper[5031]: I0129 09:26:27.237962 5031 generic.go:334] "Generic (PLEG): container finished" podID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerID="f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3" exitCode=0
Jan 29 09:26:27 crc kubenswrapper[5031]: I0129 09:26:27.238186 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerDied","Data":"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"}
Jan 29 09:26:28 crc kubenswrapper[5031]: I0129 09:26:28.252902 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerStarted","Data":"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"}
Jan 29 09:26:28 crc kubenswrapper[5031]: I0129 09:26:28.279286 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tcmdn" podStartSLOduration=2.874144235 podStartE2EDuration="5.279268165s" podCreationTimestamp="2026-01-29 09:26:23 +0000 UTC" firstStartedPulling="2026-01-29 09:26:25.203299082 +0000 UTC m=+2865.702887044" lastFinishedPulling="2026-01-29 09:26:27.608422992 +0000 UTC m=+2868.108010974" observedRunningTime="2026-01-29 09:26:28.272797542 +0000 UTC m=+2868.772385514" watchObservedRunningTime="2026-01-29 09:26:28.279268165 +0000 UTC m=+2868.778856117"
Jan 29 09:26:32 crc kubenswrapper[5031]: I0129 09:26:32.985231 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:32 crc kubenswrapper[5031]: I0129 09:26:32.985903 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.048749 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.341451 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.389096 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.611327 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.611428 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:33 crc kubenswrapper[5031]: I0129 09:26:33.659560 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:34 crc kubenswrapper[5031]: I0129 09:26:34.361216 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:35 crc kubenswrapper[5031]: I0129 09:26:35.313191 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bvkvq" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="registry-server" containerID="cri-o://2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609" gracePeriod=2
Jan 29 09:26:35 crc kubenswrapper[5031]: I0129 09:26:35.705891 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tcmdn"]
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.298506 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.331991 5031 generic.go:334] "Generic (PLEG): container finished" podID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerID="2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609" exitCode=0
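Two different grace periods are visible in this stretch: the registry-server catalog containers are killed with gracePeriod=2, while machine-config-daemon earlier got gracePeriod=600. In outline, a graceful kill is a polite stop signal followed by a hard kill once the grace period lapses; the kubelet delegates the real work to the runtime (cri-o here, per the cri-o:// IDs). Below is a Unix-only toy of those semantics, with sleep standing in for the container process:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for a container's main process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	grace := 2 * time.Second            // cf. gracePeriod=2 for registry-server above
	cmd.Process.Signal(syscall.SIGTERM) // polite stop
	select {
	case err := <-done:
		fmt.Println("exited within the grace period:", err)
	case <-time.After(grace):
		cmd.Process.Kill() // escalate to SIGKILL
		fmt.Println("grace period expired; killed")
	}
}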
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.332072 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvkvq"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.332074 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerDied","Data":"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"}
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.332138 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvkvq" event={"ID":"c5b2e693-eaad-4254-99c8-d8f8594e2b2e","Type":"ContainerDied","Data":"cdfb3e9fe9e6c95cb2b88717a0a803676d1e67cec36e3ace5bf5d88d505ef1b7"}
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.332155 5031 scope.go:117] "RemoveContainer" containerID="2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.332760 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tcmdn" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="registry-server" containerID="cri-o://11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115" gracePeriod=2
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.361814 5031 scope.go:117] "RemoveContainer" containerID="17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.384243 5031 scope.go:117] "RemoveContainer" containerID="b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.450958 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content\") pod \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.451054 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities\") pod \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.451136 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grcbx\" (UniqueName: \"kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx\") pod \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\" (UID: \"c5b2e693-eaad-4254-99c8-d8f8594e2b2e\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.452804 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities" (OuterVolumeSpecName: "utilities") pod "c5b2e693-eaad-4254-99c8-d8f8594e2b2e" (UID: "c5b2e693-eaad-4254-99c8-d8f8594e2b2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.463469 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx" (OuterVolumeSpecName: "kube-api-access-grcbx") pod "c5b2e693-eaad-4254-99c8-d8f8594e2b2e" (UID: "c5b2e693-eaad-4254-99c8-d8f8594e2b2e"). InnerVolumeSpecName "kube-api-access-grcbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.519578 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5b2e693-eaad-4254-99c8-d8f8594e2b2e" (UID: "c5b2e693-eaad-4254-99c8-d8f8594e2b2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.553577 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.553612 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.553623 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grcbx\" (UniqueName: \"kubernetes.io/projected/c5b2e693-eaad-4254-99c8-d8f8594e2b2e-kube-api-access-grcbx\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.567548 5031 scope.go:117] "RemoveContainer" containerID="2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"
Jan 29 09:26:36 crc kubenswrapper[5031]: E0129 09:26:36.568679 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609\": container with ID starting with 2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609 not found: ID does not exist" containerID="2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.568714 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609"} err="failed to get container status \"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609\": rpc error: code = NotFound desc = could not find container \"2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609\": container with ID starting with 2b26395e55c3c4bdd2bde2784e7c93a34be0b3a43b844c76954ae1e1fb980609 not found: ID does not exist"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.568732 5031 scope.go:117] "RemoveContainer" containerID="17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039"
Jan 29 09:26:36 crc kubenswrapper[5031]: E0129 09:26:36.569078 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039\": container with ID starting with 17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039 not found: ID does not exist" containerID="17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.569108 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039"} err="failed to get container status \"17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039\": rpc error: code = NotFound desc = could not find container \"17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039\": container with ID starting with 17ccc8d0c926354a17a58efe868537d4ecf2554ef2c5799db4e5e4b12926c039 not found: ID does not exist"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.569123 5031 scope.go:117] "RemoveContainer" containerID="b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82"
Jan 29 09:26:36 crc kubenswrapper[5031]: E0129 09:26:36.569468 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82\": container with ID starting with b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82 not found: ID does not exist" containerID="b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.569512 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82"} err="failed to get container status \"b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82\": rpc error: code = NotFound desc = could not find container \"b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82\": container with ID starting with b2648097b15e71027ecec3736a9529154221e5b8ab9dbbc0890d817e69bf1e82 not found: ID does not exist"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.666601 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.676732 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bvkvq"]
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.779437 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.961600 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swvnl\" (UniqueName: \"kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl\") pod \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.961771 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities\") pod \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.961893 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-catalog-content\") pod \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\" (UID: \"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661\") "
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.962666 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities" (OuterVolumeSpecName: "utilities") pod "2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" (UID: "2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:26:36 crc kubenswrapper[5031]: I0129 09:26:36.965414 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl" (OuterVolumeSpecName: "kube-api-access-swvnl") pod "2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" (UID: "2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661"). InnerVolumeSpecName "kube-api-access-swvnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.064096 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swvnl\" (UniqueName: \"kubernetes.io/projected/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-kube-api-access-swvnl\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.064138 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.345095 5031 generic.go:334] "Generic (PLEG): container finished" podID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerID="11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115" exitCode=0
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.345175 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmdn"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.345195 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerDied","Data":"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"}
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.345551 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmdn" event={"ID":"2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661","Type":"ContainerDied","Data":"1a51460ef742466f70d1cc902f9b7d699c09f50e1604da49561924b3dd739cf9"}
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.345583 5031 scope.go:117] "RemoveContainer" containerID="11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.364590 5031 scope.go:117] "RemoveContainer" containerID="f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.388939 5031 scope.go:117] "RemoveContainer" containerID="4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.408449 5031 scope.go:117] "RemoveContainer" containerID="11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"
Jan 29 09:26:37 crc kubenswrapper[5031]: E0129 09:26:37.408879 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115\": container with ID starting with 11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115 not found: ID does not exist" containerID="11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.408923 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115"} err="failed to get container status \"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115\": rpc error: code = NotFound desc = could not find container \"11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115\": container with ID starting with 11a8471a478c9d182d2b3028d5c59a46d8b175e9a07605debc174a5a26cdf115 not found: ID does not exist"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.408950 5031 scope.go:117] "RemoveContainer" containerID="f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"
Jan 29 09:26:37 crc kubenswrapper[5031]: E0129 09:26:37.409218 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3\": container with ID starting with f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3 not found: ID does not exist" containerID="f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.409260 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3"} err="failed to get container status \"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3\": rpc error: code = NotFound desc = could not find container \"f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3\": container with ID starting with f45bfdebff4af0ef70c887c04b82203a68cf62aea9a2a0b0a485d4a4c58b06c3 not found: ID does not exist"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.409280 5031 scope.go:117] "RemoveContainer" containerID="4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3"
Jan 29 09:26:37 crc kubenswrapper[5031]: E0129 09:26:37.409754 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3\": container with ID starting with 4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3 not found: ID does not exist" containerID="4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3"
Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.409779 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3"} err="failed to get container status \"4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3\": rpc error: code = NotFound desc = could not find container \"4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3\": container with ID starting with 4153b17b06d6d96b80812f65c9be340e03bf450c62e7aaa22802fbcd07fb34a3 not found: ID does not exist"
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.471139 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.689069 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tcmdn"] Jan 29 09:26:37 crc kubenswrapper[5031]: I0129 09:26:37.698470 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tcmdn"] Jan 29 09:26:38 crc kubenswrapper[5031]: I0129 09:26:38.294212 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" path="/var/lib/kubelet/pods/2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661/volumes" Jan 29 09:26:38 crc kubenswrapper[5031]: I0129 09:26:38.294867 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" path="/var/lib/kubelet/pods/c5b2e693-eaad-4254-99c8-d8f8594e2b2e/volumes" Jan 29 09:26:38 crc kubenswrapper[5031]: I0129 09:26:38.493774 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:26:38 crc kubenswrapper[5031]: I0129 09:26:38.493840 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:08 crc kubenswrapper[5031]: I0129 09:27:08.494176 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:08 crc kubenswrapper[5031]: I0129 09:27:08.494801 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.493833 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.494515 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.494583 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.495594 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.495692 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" gracePeriod=600 Jan 29 09:27:38 crc kubenswrapper[5031]: E0129 09:27:38.630544 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.851938 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" exitCode=0 Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.851988 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"} Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.852028 5031 scope.go:117] "RemoveContainer" containerID="6ba1b771933fda7cf3c2cbf7b45f2473fcaa9f1b15e8d86548eef69a32f57643" Jan 29 09:27:38 crc kubenswrapper[5031]: I0129 09:27:38.852934 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:27:38 crc kubenswrapper[5031]: E0129 09:27:38.853288 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:27:51 crc kubenswrapper[5031]: I0129 09:27:51.282319 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:27:51 crc kubenswrapper[5031]: E0129 09:27:51.283056 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:05 crc kubenswrapper[5031]: I0129 
09:28:05.282839 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:28:05 crc kubenswrapper[5031]: E0129 09:28:05.284086 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:19 crc kubenswrapper[5031]: I0129 09:28:19.282345 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:28:19 crc kubenswrapper[5031]: E0129 09:28:19.283231 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:30 crc kubenswrapper[5031]: I0129 09:28:30.287498 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:28:30 crc kubenswrapper[5031]: E0129 09:28:30.288255 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:41 crc kubenswrapper[5031]: I0129 09:28:41.374234 5031 generic.go:334] "Generic (PLEG): container finished" podID="05fc07ec-828a-468d-be87-1fe3925dfb0c" containerID="323b493c4f6d97b42d9ad074e0b65bfd7f09a7fa18f28176f8c0659d64233c59" exitCode=0 Jan 29 09:28:41 crc kubenswrapper[5031]: I0129 09:28:41.374317 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" event={"ID":"05fc07ec-828a-468d-be87-1fe3925dfb0c","Type":"ContainerDied","Data":"323b493c4f6d97b42d9ad074e0b65bfd7f09a7fa18f28176f8c0659d64233c59"} Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.282231 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:28:42 crc kubenswrapper[5031]: E0129 09:28:42.282946 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.776562 5031 util.go:48] "No ready sandbox for pod can be found. 
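The repeating "back-off 5m0s" entries show CrashLoopBackOff holding machine-config-daemon's restart: each sync is skipped until the back-off window expires. A rough Go sketch of the doubling schedule behind that 5m0s figure (the 10s initial delay and 5m cap are kubelet defaults assumed here, not stated in the log):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay doubles the wait per restart up to a cap, which is the
// shape of the kubelet's CrashLoopBackOff schedule.
func crashLoopDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
	// restart 5 -> back-off 5m0s, matching the "back-off 5m0s" entries above
}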
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.776562 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts"
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881480 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881645 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881684 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881750 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881804 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881880 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881914 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-extra-config-0\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881963 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.881991 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.882047 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.882080 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdzd\" (UniqueName: \"kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd\") pod \"05fc07ec-828a-468d-be87-1fe3925dfb0c\" (UID: \"05fc07ec-828a-468d-be87-1fe3925dfb0c\") "
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.889675 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd" (OuterVolumeSpecName: "kube-api-access-mwdzd") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "kube-api-access-mwdzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.889816 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph" (OuterVolumeSpecName: "ceph") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.897599 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.910603 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.915642 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.916428 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.917760 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.918191 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory" (OuterVolumeSpecName: "inventory") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.918785 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.924641 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "05fc07ec-828a-468d-be87-1fe3925dfb0c" (UID: "05fc07ec-828a-468d-be87-1fe3925dfb0c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.985410 5031 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.985733 5031 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.985858 5031 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.985945 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986024 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986110 5031 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986214 5031 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986350 5031 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/05fc07ec-828a-468d-be87-1fe3925dfb0c-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986463 5031 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986555 5031 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05fc07ec-828a-468d-be87-1fe3925dfb0c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:42 crc kubenswrapper[5031]: I0129 09:28:42.986634 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdzd\" (UniqueName: \"kubernetes.io/projected/05fc07ec-828a-468d-be87-1fe3925dfb0c-kube-api-access-mwdzd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:43 crc kubenswrapper[5031]: I0129 09:28:43.397252 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" event={"ID":"05fc07ec-828a-468d-be87-1fe3925dfb0c","Type":"ContainerDied","Data":"4bd4f37616e1b75c2f74acf6c0aa30a9b1f906830b5f081f404394c20aee8a4b"} Jan 29 09:28:43 crc kubenswrapper[5031]: I0129 09:28:43.397716 5031 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd4f37616e1b75c2f74acf6c0aa30a9b1f906830b5f081f404394c20aee8a4b" Jan 29 09:28:43 crc kubenswrapper[5031]: I0129 09:28:43.397450 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts" Jan 29 09:28:54 crc kubenswrapper[5031]: I0129 09:28:54.283825 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:28:54 crc kubenswrapper[5031]: E0129 09:28:54.284702 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448025 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448827 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fc07ec-828a-468d-be87-1fe3925dfb0c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448850 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc07ec-828a-468d-be87-1fe3925dfb0c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448873 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448881 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448902 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448912 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448936 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="extract-content" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448944 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="extract-content" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448954 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="extract-content" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448962 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="extract-content" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448978 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="extract-utilities" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.448987 5031 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="extract-utilities" Jan 29 09:28:57 crc kubenswrapper[5031]: E0129 09:28:57.448995 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="extract-utilities" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.449003 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="extract-utilities" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.449223 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb9cfa3-ce21-4cfe-8710-7d36ac3f8661" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.449239 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b2e693-eaad-4254-99c8-d8f8594e2b2e" containerName="registry-server" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.449259 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fc07ec-828a-468d-be87-1fe3925dfb0c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.450549 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.457174 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.458594 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.458881 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.531917 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.534544 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.534544 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.536927 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.552057 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556061 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556105 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556136 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556157 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556181 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b7w\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-kube-api-access-n9b7w\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556213 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-run\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556251 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556267 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556289 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556310 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556330 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556355 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556390 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556408 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556435 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.556463 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657813 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657860 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657888 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657909 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-ceph\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657923 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-run\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657941 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657961 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.657981 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658001 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-scripts\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658000 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0"
\"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658103 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b7w\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-kube-api-access-n9b7w\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658228 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658278 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-run\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658301 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-run\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658322 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-dev\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658387 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658439 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphc7\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-kube-api-access-sphc7\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658644 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658681 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658710 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658797 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658932 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.658774 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659332 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659390 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659440 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659512 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659537 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-sys\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659539 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 
09:28:57.659562 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659591 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659633 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-lib-modules\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659667 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659714 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659746 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659808 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659869 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659938 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659945 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.659967 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.660192 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1fae57c0-f6a0-4239-b513-e37aec4f4065-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.666159 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.667994 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.670900 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.671002 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.675888 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fae57c0-f6a0-4239-b513-e37aec4f4065-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.690983 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b7w\" (UniqueName: \"kubernetes.io/projected/1fae57c0-f6a0-4239-b513-e37aec4f4065-kube-api-access-n9b7w\") pod \"cinder-volume-volume1-0\" (UID: \"1fae57c0-f6a0-4239-b513-e37aec4f4065\") " pod="openstack/cinder-volume-volume1-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.833982 5031 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.833982 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836004 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836057 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836092 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836164 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836191 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-ceph\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836210 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-run\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836232 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836263 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-scripts\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836307 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-dev\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836329 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836355 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sphc7\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-kube-api-access-sphc7\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836412 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836504 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836616 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836722 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-sys\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836783 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-lib-modules\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.836828 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.837017 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.837964 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838070 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838109 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-sys\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838144 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-lib-modules\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838187 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-dev\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838238 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838482 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-run\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.838528 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ec4354fa-4aef-4401-befd-f3a59619869e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.841683 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-scripts\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.844064 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.857415 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-ceph\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.860946 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0"
Jan 29 09:28:57 crc
kubenswrapper[5031]: I0129 09:28:57.866406 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec4354fa-4aef-4401-befd-f3a59619869e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:57 crc kubenswrapper[5031]: I0129 09:28:57.876842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sphc7\" (UniqueName: \"kubernetes.io/projected/ec4354fa-4aef-4401-befd-f3a59619869e-kube-api-access-sphc7\") pod \"cinder-backup-0\" (UID: \"ec4354fa-4aef-4401-befd-f3a59619869e\") " pod="openstack/cinder-backup-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.151036 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.262661 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.264134 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.266122 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-8m2hj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.266338 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.266472 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.271865 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.307531 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.371925 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.373465 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.377346 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.377582 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.377699 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.381518 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qn4rn" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.397300 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.414495 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-2knmh"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.416033 5031 util.go:30] "No sandbox for pod can be found. 
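[Annotation, not part of the capture] The cinder-volume and cinder-backup sequences above are easier to audit once each mount entry is reduced to (operation, plugin, volume, pod). The sketch below assumes the input is one journal entry per line, as journalctl emits it, and mirrors the escaped quoting visible in this capture; the pattern is an inference from these lines, not a supported log schema.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches entries like:
//   "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/...\") ... pod="openstack/cinder-backup-0"
var mountRe = regexp.MustCompile(`"(operationExecutor\.MountVolume started|MountVolume\.SetUp succeeded) for volume \\"([^\\]+)\\" \(UniqueName: \\"(kubernetes\.io/[^/\\]+).*?pod="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // mount entries can be very long
	for sc.Scan() {
		if m := mountRe.FindStringSubmatch(sc.Text()); m != nil {
			// op, plugin, volume, pod
			fmt.Printf("%-45s %-28s %-22s %s\n", m[1], m[3], m[2], m[4])
		}
	}
}

On a node like this one, piping `journalctl -u kubelet --no-pager` through the program would print one row per mount operation, grouping host-path, secret, projected, and local-volume mounts at a glance.
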
Need to start a new one" pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.429187 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2knmh"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.447410 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-ba01-account-create-update-j9tfj"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.448870 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.451323 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.451428 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.451536 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.451596 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.451627 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvh2\" (UniqueName: \"kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.453803 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.479415 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ba01-account-create-update-j9tfj"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.509499 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.512007 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.523395 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.523606 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553142 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553197 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553224 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvl9\" (UniqueName: \"kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9\") pod \"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553243 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slvh2\" (UniqueName: \"kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553271 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts\") pod \"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553292 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bx2\" (UniqueName: \"kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553327 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553385 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553407 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553444 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553463 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553486 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553505 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553525 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553565 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553595 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dkfs\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.553613 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.554168 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.554982 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.555488 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.557538 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.576592 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.576687 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.580776 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slvh2\" (UniqueName: \"kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2\") pod \"horizon-56bcfc8bf7-tfqs6\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.605843 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.616260 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: E0129 09:28:58.617814 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-5dkfs logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="72e35f66-7dbf-403a-926a-47495e147bd3" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.645004 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.646900 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657772 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657862 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657893 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvl9\" (UniqueName: \"kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9\") pod \"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657925 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657952 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kb6\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.657983 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts\") pod \"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658008 5031 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f4bx2\" (UniqueName: \"kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658043 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658069 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658143 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658165 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658206 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658231 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658264 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658290 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658332 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658386 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658415 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dkfs\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658438 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658460 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.658482 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.659563 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.659981 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.660383 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts\") pod 
\"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.661501 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.662701 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.663981 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.673756 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.687695 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvl9\" (UniqueName: \"kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9\") pod \"manila-ba01-account-create-update-j9tfj\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.687777 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.688039 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.689892 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.693067 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.693614 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dkfs\" (UniqueName: 
\"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.703004 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.707267 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bx2\" (UniqueName: \"kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2\") pod \"manila-db-create-2knmh\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.743269 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " pod="openstack/glance-default-external-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761476 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761610 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761660 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761700 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761753 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761799 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761838 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcs4\" (UniqueName: \"kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761889 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761926 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kb6\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.761969 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.762012 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.762071 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.762113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.762161 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.762815 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.767665 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.768150 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.769410 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.770648 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.788000 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2knmh" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.791556 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.798477 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.805066 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.805965 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.810627 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.816045 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kb6\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6\") pod \"glance-default-internal-api-0\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.865639 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.865706 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.865729 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hcs4\" (UniqueName: \"kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.865773 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.865833 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.866357 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.876878 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.877420 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.882945 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.887881 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.922480 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hcs4\" (UniqueName: \"kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4\") pod \"horizon-54d75c5b5c-k4vm8\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:58 crc kubenswrapper[5031]: I0129 09:28:58.943615 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.136155 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.276624 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"] Jan 29 09:28:59 crc kubenswrapper[5031]: W0129 09:28:59.431529 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eda84d3_0c58_4449_80e1_5198ecb37e22.slice/crio-81840adab319d5b10d20cba8c2abfae1c431ba7a89b651f0d07f5c2fd6bfb6c0 WatchSource:0}: Error finding container 81840adab319d5b10d20cba8c2abfae1c431ba7a89b651f0d07f5c2fd6bfb6c0: Status 404 returned error can't find the container with id 81840adab319d5b10d20cba8c2abfae1c431ba7a89b651f0d07f5c2fd6bfb6c0 Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.547619 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2knmh"] Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.584979 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2knmh" event={"ID":"6022f9c4-3a0d-4f89-881d-b6a17970ac9b","Type":"ContainerStarted","Data":"caef8a54960b33fd796ec33c233d5045ba5e1df593dc852a43dba44cfd6e73e1"} Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.589971 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1fae57c0-f6a0-4239-b513-e37aec4f4065","Type":"ContainerStarted","Data":"164e70f2cdff5213e2b3ac3f920533c6f94ef4faf69d6af47da1c61577568530"} Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.594116 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerStarted","Data":"81840adab319d5b10d20cba8c2abfae1c431ba7a89b651f0d07f5c2fd6bfb6c0"} Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.595121 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.595755 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ec4354fa-4aef-4401-befd-f3a59619869e","Type":"ContainerStarted","Data":"c8a2954b427a0be65a8f74c1e425cfa6ef4b4563a3cda81aca5799f080f4732c"} Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.612061 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.625210 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688246 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688585 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dkfs\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688657 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688764 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688787 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688823 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688925 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688958 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.688989 5031 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts\") pod \"72e35f66-7dbf-403a-926a-47495e147bd3\" (UID: \"72e35f66-7dbf-403a-926a-47495e147bd3\") " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.689009 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.689262 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs" (OuterVolumeSpecName: "logs") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.690019 5031 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.690049 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e35f66-7dbf-403a-926a-47495e147bd3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.690507 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"] Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.696873 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.697515 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.698188 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs" (OuterVolumeSpecName: "kube-api-access-5dkfs") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "kube-api-access-5dkfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.698595 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.700239 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data" (OuterVolumeSpecName: "config-data") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.700604 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph" (OuterVolumeSpecName: "ceph") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.700664 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts" (OuterVolumeSpecName: "scripts") pod "72e35f66-7dbf-403a-926a-47495e147bd3" (UID: "72e35f66-7dbf-403a-926a-47495e147bd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.702582 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ba01-account-create-update-j9tfj"] Jan 29 09:28:59 crc kubenswrapper[5031]: W0129 09:28:59.703711 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9752d31_4851_463a_9d9c_f27283dd5f54.slice/crio-84a6743538678127fda32b60cc1061735c83f789ced6d05620f5985a641dc20c WatchSource:0}: Error finding container 84a6743538678127fda32b60cc1061735c83f789ced6d05620f5985a641dc20c: Status 404 returned error can't find the container with id 84a6743538678127fda32b60cc1061735c83f789ced6d05620f5985a641dc20c Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794038 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794339 5031 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794349 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794358 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794382 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dkfs\" (UniqueName: \"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-kube-api-access-5dkfs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794391 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/72e35f66-7dbf-403a-926a-47495e147bd3-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.794398 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e35f66-7dbf-403a-926a-47495e147bd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.835748 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 09:28:59 crc kubenswrapper[5031]: I0129 09:28:59.897266 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.611801 5031 generic.go:334] "Generic (PLEG): container finished" podID="6022f9c4-3a0d-4f89-881d-b6a17970ac9b" containerID="47856d2b5ccfd3cd7354ff707a81e3b60c816732e50e53884cc8c7984aa4d65b" exitCode=0 Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.612329 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2knmh" event={"ID":"6022f9c4-3a0d-4f89-881d-b6a17970ac9b","Type":"ContainerDied","Data":"47856d2b5ccfd3cd7354ff707a81e3b60c816732e50e53884cc8c7984aa4d65b"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.618888 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1fae57c0-f6a0-4239-b513-e37aec4f4065","Type":"ContainerStarted","Data":"41793d1e56514d7ef6ae10c8568e44621957a8610d4b91baf87f51ae2a402e4f"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.618923 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1fae57c0-f6a0-4239-b513-e37aec4f4065","Type":"ContainerStarted","Data":"7ccf765ba8562d89370c755800fc1dfa0c2d1ba19b30ede55ecb69ab25ed78d5"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.624794 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerStarted","Data":"a6b1402009ec38a38a6efa324f59d9049bc0cf20abba4fb742c0d4ffe7aeee18"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.624854 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerStarted","Data":"2805a55d1d6d8c357ee13a8593974f2281a304365e74d4ab679488a95b4e3893"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.629514 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ba01-account-create-update-j9tfj" event={"ID":"1c135329-1c87-495b-affc-91c0520b26ba","Type":"ContainerStarted","Data":"311cc19d968bed58031f1a386154be57017827571baf518abc59dafda135a65c"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.629633 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ba01-account-create-update-j9tfj" event={"ID":"1c135329-1c87-495b-affc-91c0520b26ba","Type":"ContainerStarted","Data":"0cafc69722cb7ff781919b0a2c7ca45cc0acd3d420d7b60b2efb1dc1e40a153a"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.637636 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"ec4354fa-4aef-4401-befd-f3a59619869e","Type":"ContainerStarted","Data":"65c0a1a27851c1b284f760750b4f5ad3752f54f2763460a4ef74024862971b40"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.645616 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerStarted","Data":"84a6743538678127fda32b60cc1061735c83f789ced6d05620f5985a641dc20c"} Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.645635 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.653730 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.747377511 podStartE2EDuration="3.653712252s" podCreationTimestamp="2026-01-29 09:28:57 +0000 UTC" firstStartedPulling="2026-01-29 09:28:58.56518717 +0000 UTC m=+3019.064775122" lastFinishedPulling="2026-01-29 09:28:59.471521911 +0000 UTC m=+3019.971109863" observedRunningTime="2026-01-29 09:29:00.652775656 +0000 UTC m=+3021.152363618" watchObservedRunningTime="2026-01-29 09:29:00.653712252 +0000 UTC m=+3021.153300204" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.681638 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-ba01-account-create-update-j9tfj" podStartSLOduration=2.681617168 podStartE2EDuration="2.681617168s" podCreationTimestamp="2026-01-29 09:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:00.681207917 +0000 UTC m=+3021.180795879" watchObservedRunningTime="2026-01-29 09:29:00.681617168 +0000 UTC m=+3021.181205120" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.783304 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.809356 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.826451 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.828500 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.833987 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.834297 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.844912 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922285 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922351 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922425 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922456 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922500 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922549 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922585 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922635 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:00 crc kubenswrapper[5031]: I0129 09:29:00.922672 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmk7\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024120 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024159 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024198 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024220 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024252 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024287 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024315 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024361 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.024432 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmk7\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.025205 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.025522 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.025879 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.029176 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.029642 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.034619 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.040446 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.046151 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " 
pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.048224 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmk7\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.130125 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.159072 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.160217 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.199953 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.224914 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.258422 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.260518 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.268420 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.285723 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.328240 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.363935 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365001 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365086 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365116 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365149 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365181 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365204 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h2p\" (UniqueName: \"kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.365241 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 
09:29:01.381251 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.390491 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b47759886-4vh7j"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.392113 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.399937 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b47759886-4vh7j"] Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.472232 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.472862 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82p5\" (UniqueName: \"kubernetes.io/projected/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-kube-api-access-m82p5\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.472992 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h2p\" (UniqueName: \"kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473236 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473312 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-config-data\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473439 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-scripts\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473507 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473565 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-combined-ca-bundle\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473679 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-secret-key\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473723 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-tls-certs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473883 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-logs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.473923 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.474030 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.474136 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.474140 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.474936 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.475055 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts\") pod \"horizon-5df6bb9c74-nlm69\" (UID: 
\"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.483042 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.483978 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.489918 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.494709 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h2p\" (UniqueName: \"kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p\") pod \"horizon-5df6bb9c74-nlm69\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577148 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-config-data\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577578 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-scripts\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577621 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-combined-ca-bundle\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577671 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-secret-key\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577699 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-tls-certs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577775 
5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-logs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.577888 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82p5\" (UniqueName: \"kubernetes.io/projected/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-kube-api-access-m82p5\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.578560 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-scripts\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.580043 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-config-data\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.580337 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-logs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.600272 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-tls-certs\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.601118 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-combined-ca-bundle\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.607931 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.608660 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-horizon-secret-key\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.613326 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82p5\" (UniqueName: \"kubernetes.io/projected/7cfc507f-5595-4ff5-9f5f-8942dc5468dc-kube-api-access-m82p5\") pod \"horizon-b47759886-4vh7j\" (UID: \"7cfc507f-5595-4ff5-9f5f-8942dc5468dc\") " pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.659288 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerStarted","Data":"6793370f884f3e40faa40bc10c3efd66ef06d1804df5aef26709616fed73e3bf"} Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.659508 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-log" containerID="cri-o://a6b1402009ec38a38a6efa324f59d9049bc0cf20abba4fb742c0d4ffe7aeee18" gracePeriod=30 Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.659884 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-httpd" containerID="cri-o://6793370f884f3e40faa40bc10c3efd66ef06d1804df5aef26709616fed73e3bf" gracePeriod=30 Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.675815 5031 generic.go:334] "Generic (PLEG): container finished" podID="1c135329-1c87-495b-affc-91c0520b26ba" containerID="311cc19d968bed58031f1a386154be57017827571baf518abc59dafda135a65c" exitCode=0 Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.675879 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ba01-account-create-update-j9tfj" event={"ID":"1c135329-1c87-495b-affc-91c0520b26ba","Type":"ContainerDied","Data":"311cc19d968bed58031f1a386154be57017827571baf518abc59dafda135a65c"} Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.690651 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ec4354fa-4aef-4401-befd-f3a59619869e","Type":"ContainerStarted","Data":"bf8bd7516b9a12f0d71ab6880134132ea53f740634ca516b57b0ff32cc6ffdc7"} Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.692956 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.692926147 podStartE2EDuration="3.692926147s" podCreationTimestamp="2026-01-29 09:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:01.680452303 +0000 UTC m=+3022.180040265" watchObservedRunningTime="2026-01-29 09:29:01.692926147 +0000 UTC m=+3022.192514099" Jan 29 09:29:01 crc kubenswrapper[5031]: I0129 09:29:01.730809 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b47759886-4vh7j" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.000407 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.168632444 podStartE2EDuration="5.00038041s" podCreationTimestamp="2026-01-29 09:28:57 +0000 UTC" firstStartedPulling="2026-01-29 09:28:59.187983948 +0000 UTC m=+3019.687571900" lastFinishedPulling="2026-01-29 09:29:00.019731914 +0000 UTC m=+3020.519319866" observedRunningTime="2026-01-29 09:29:01.754025371 +0000 UTC m=+3022.253613333" watchObservedRunningTime="2026-01-29 09:29:02.00038041 +0000 UTC m=+3022.499968372" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.003763 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.198241 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2knmh" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.306141 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4bx2\" (UniqueName: \"kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2\") pod \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.306606 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts\") pod \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\" (UID: \"6022f9c4-3a0d-4f89-881d-b6a17970ac9b\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.308348 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6022f9c4-3a0d-4f89-881d-b6a17970ac9b" (UID: "6022f9c4-3a0d-4f89-881d-b6a17970ac9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.321873 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2" (OuterVolumeSpecName: "kube-api-access-f4bx2") pod "6022f9c4-3a0d-4f89-881d-b6a17970ac9b" (UID: "6022f9c4-3a0d-4f89-881d-b6a17970ac9b"). InnerVolumeSpecName "kube-api-access-f4bx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.338983 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e35f66-7dbf-403a-926a-47495e147bd3" path="/var/lib/kubelet/pods/72e35f66-7dbf-403a-926a-47495e147bd3/volumes" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.409759 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"] Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.411321 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4bx2\" (UniqueName: \"kubernetes.io/projected/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-kube-api-access-f4bx2\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.411357 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6022f9c4-3a0d-4f89-881d-b6a17970ac9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.712704 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerStarted","Data":"4b3b8b2bb78f45cfebd5d9f92873de2223c9de9e2d5ba6804be94e5a13416798"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.715735 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b47759886-4vh7j"] Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.715894 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2knmh" event={"ID":"6022f9c4-3a0d-4f89-881d-b6a17970ac9b","Type":"ContainerDied","Data":"caef8a54960b33fd796ec33c233d5045ba5e1df593dc852a43dba44cfd6e73e1"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.716000 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caef8a54960b33fd796ec33c233d5045ba5e1df593dc852a43dba44cfd6e73e1" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.717668 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-2knmh" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.720248 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerStarted","Data":"946ffc8ecac4e18d4794fd6107bffa15125486b231b219ab22a081d2ba3baffe"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.722857 5031 generic.go:334] "Generic (PLEG): container finished" podID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerID="6793370f884f3e40faa40bc10c3efd66ef06d1804df5aef26709616fed73e3bf" exitCode=143 Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.722891 5031 generic.go:334] "Generic (PLEG): container finished" podID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerID="a6b1402009ec38a38a6efa324f59d9049bc0cf20abba4fb742c0d4ffe7aeee18" exitCode=143 Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.723056 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerDied","Data":"6793370f884f3e40faa40bc10c3efd66ef06d1804df5aef26709616fed73e3bf"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.723092 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerDied","Data":"a6b1402009ec38a38a6efa324f59d9049bc0cf20abba4fb742c0d4ffe7aeee18"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.723103 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a612e054-14dc-48e1-b60a-4f75bbc44de2","Type":"ContainerDied","Data":"2805a55d1d6d8c357ee13a8593974f2281a304365e74d4ab679488a95b4e3893"} Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.723114 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2805a55d1d6d8c357ee13a8593974f2281a304365e74d4ab679488a95b4e3893" Jan 29 09:29:02 crc kubenswrapper[5031]: W0129 09:29:02.762601 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cfc507f_5595_4ff5_9f5f_8942dc5468dc.slice/crio-35abb34cfefe5153ad2e03b2c39bbfb61ec1406be8c09c70ffc15b73a9554f60 WatchSource:0}: Error finding container 35abb34cfefe5153ad2e03b2c39bbfb61ec1406be8c09c70ffc15b73a9554f60: Status 404 returned error can't find the container with id 35abb34cfefe5153ad2e03b2c39bbfb61ec1406be8c09c70ffc15b73a9554f60 Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.835113 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.877396 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919432 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919519 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919540 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919612 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919651 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919725 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kb6\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919809 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919825 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.919848 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle\") pod \"a612e054-14dc-48e1-b60a-4f75bbc44de2\" (UID: \"a612e054-14dc-48e1-b60a-4f75bbc44de2\") " Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.920831 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs" (OuterVolumeSpecName: "logs") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.921060 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.932610 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6" (OuterVolumeSpecName: "kube-api-access-n7kb6") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "kube-api-access-n7kb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.932834 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.935272 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph" (OuterVolumeSpecName: "ceph") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:02 crc kubenswrapper[5031]: I0129 09:29:02.962591 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts" (OuterVolumeSpecName: "scripts") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021733 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021777 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021786 5031 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021796 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kb6\" (UniqueName: \"kubernetes.io/projected/a612e054-14dc-48e1-b60a-4f75bbc44de2-kube-api-access-n7kb6\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021806 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a612e054-14dc-48e1-b60a-4f75bbc44de2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.021828 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.078692 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.092337 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.125541 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.125858 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.133447 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.151984 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.152088 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data" (OuterVolumeSpecName: "config-data") pod "a612e054-14dc-48e1-b60a-4f75bbc44de2" (UID: "a612e054-14dc-48e1-b60a-4f75bbc44de2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.178239 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.234304 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts\") pod \"1c135329-1c87-495b-affc-91c0520b26ba\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.235924 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c135329-1c87-495b-affc-91c0520b26ba" (UID: "1c135329-1c87-495b-affc-91c0520b26ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.236264 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjvl9\" (UniqueName: \"kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9\") pod \"1c135329-1c87-495b-affc-91c0520b26ba\" (UID: \"1c135329-1c87-495b-affc-91c0520b26ba\") " Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.241298 5031 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c135329-1c87-495b-affc-91c0520b26ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.241338 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.241348 5031 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a612e054-14dc-48e1-b60a-4f75bbc44de2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.257695 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9" (OuterVolumeSpecName: "kube-api-access-bjvl9") pod "1c135329-1c87-495b-affc-91c0520b26ba" (UID: "1c135329-1c87-495b-affc-91c0520b26ba"). InnerVolumeSpecName "kube-api-access-bjvl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.343463 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjvl9\" (UniqueName: \"kubernetes.io/projected/1c135329-1c87-495b-affc-91c0520b26ba-kube-api-access-bjvl9\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.749445 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerStarted","Data":"60b12b83310ff392874a38b37df08049ef7d9295e2e5de69075a1e5b8ec19dab"} Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.753269 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b47759886-4vh7j" event={"ID":"7cfc507f-5595-4ff5-9f5f-8942dc5468dc","Type":"ContainerStarted","Data":"35abb34cfefe5153ad2e03b2c39bbfb61ec1406be8c09c70ffc15b73a9554f60"} Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.756512 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.758191 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ba01-account-create-update-j9tfj" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.763352 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ba01-account-create-update-j9tfj" event={"ID":"1c135329-1c87-495b-affc-91c0520b26ba","Type":"ContainerDied","Data":"0cafc69722cb7ff781919b0a2c7ca45cc0acd3d420d7b60b2efb1dc1e40a153a"} Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.763663 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cafc69722cb7ff781919b0a2c7ca45cc0acd3d420d7b60b2efb1dc1e40a153a" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.816294 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.828499 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.836980 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:03 crc kubenswrapper[5031]: E0129 09:29:03.837962 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6022f9c4-3a0d-4f89-881d-b6a17970ac9b" containerName="mariadb-database-create" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.837982 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6022f9c4-3a0d-4f89-881d-b6a17970ac9b" containerName="mariadb-database-create" Jan 29 09:29:03 crc kubenswrapper[5031]: E0129 09:29:03.837999 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-log" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838005 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-log" Jan 29 09:29:03 crc kubenswrapper[5031]: E0129 09:29:03.838059 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c135329-1c87-495b-affc-91c0520b26ba" containerName="mariadb-account-create-update" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838067 5031 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c135329-1c87-495b-affc-91c0520b26ba" containerName="mariadb-account-create-update" Jan 29 09:29:03 crc kubenswrapper[5031]: E0129 09:29:03.838076 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-httpd" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838083 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-httpd" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838425 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-log" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838438 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6022f9c4-3a0d-4f89-881d-b6a17970ac9b" containerName="mariadb-database-create" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838484 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" containerName="glance-httpd" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.838495 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c135329-1c87-495b-affc-91c0520b26ba" containerName="mariadb-account-create-update" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.840241 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.843471 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.843699 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.844512 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961077 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961144 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961169 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961220 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961239 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961262 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzdrz\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-kube-api-access-qzdrz\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961311 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961382 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:03 crc kubenswrapper[5031]: I0129 09:29:03.961448 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063543 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063606 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063628 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063666 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063684 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063702 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzdrz\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-kube-api-access-qzdrz\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063736 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063776 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.063823 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.064157 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.064796 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.065210 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.072738 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 
09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.072932 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.073306 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.076155 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.089444 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.095337 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzdrz\" (UniqueName: \"kubernetes.io/projected/4e136d48-7be7-4b0f-a45c-da6b3d218b8d-kube-api-access-qzdrz\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.142556 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e136d48-7be7-4b0f-a45c-da6b3d218b8d\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.184414 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.323419 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a612e054-14dc-48e1-b60a-4f75bbc44de2" path="/var/lib/kubelet/pods/a612e054-14dc-48e1-b60a-4f75bbc44de2/volumes" Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.779211 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerStarted","Data":"d78ee828af413f13419dc76b297b76c1042b635d859c6f16bdd0b593038cb4ca"} Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.779778 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-log" containerID="cri-o://60b12b83310ff392874a38b37df08049ef7d9295e2e5de69075a1e5b8ec19dab" gracePeriod=30 Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.787564 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-httpd" containerID="cri-o://d78ee828af413f13419dc76b297b76c1042b635d859c6f16bdd0b593038cb4ca" gracePeriod=30 Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.798089 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:29:04 crc kubenswrapper[5031]: I0129 09:29:04.812350 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.812327391 podStartE2EDuration="4.812327391s" podCreationTimestamp="2026-01-29 09:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:04.810877802 +0000 UTC m=+3025.310465764" watchObservedRunningTime="2026-01-29 09:29:04.812327391 +0000 UTC m=+3025.311915353" Jan 29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.792766 5031 generic.go:334] "Generic (PLEG): container finished" podID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerID="d78ee828af413f13419dc76b297b76c1042b635d859c6f16bdd0b593038cb4ca" exitCode=0 Jan 29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.793240 5031 generic.go:334] "Generic (PLEG): container finished" podID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerID="60b12b83310ff392874a38b37df08049ef7d9295e2e5de69075a1e5b8ec19dab" exitCode=143 Jan 29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.792846 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerDied","Data":"d78ee828af413f13419dc76b297b76c1042b635d859c6f16bdd0b593038cb4ca"} Jan 29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.793315 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerDied","Data":"60b12b83310ff392874a38b37df08049ef7d9295e2e5de69075a1e5b8ec19dab"} Jan 29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.796318 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e136d48-7be7-4b0f-a45c-da6b3d218b8d","Type":"ContainerStarted","Data":"85fafde52768934a0ecb3a47acac0616a7aab24bdd4a6d8378d90784730e0dda"} Jan 
29 09:29:05 crc kubenswrapper[5031]: I0129 09:29:05.796702 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e136d48-7be7-4b0f-a45c-da6b3d218b8d","Type":"ContainerStarted","Data":"5b38d668f24f5651e52c9d4b9ef29534bbfb2489cd4420b3f5c5716fedc259c8"} Jan 29 09:29:06 crc kubenswrapper[5031]: I0129 09:29:06.283856 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:29:06 crc kubenswrapper[5031]: E0129 09:29:06.285795 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.118359 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.457210 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.677702 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-fmrct"] Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.679077 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.682184 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.682371 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-g9thb" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.697285 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fmrct"] Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.807220 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.807682 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.807765 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvswt\" (UniqueName: \"kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.807803 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.911229 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.911490 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.911551 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.911618 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvswt\" (UniqueName: \"kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.917808 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.918176 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.928228 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:08 crc kubenswrapper[5031]: I0129 09:29:08.929884 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvswt\" (UniqueName: \"kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt\") pod \"manila-db-sync-fmrct\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") " pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:09 crc kubenswrapper[5031]: I0129 09:29:09.009181 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fmrct" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.187624 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259536 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259583 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259622 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259669 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmk7\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259737 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259814 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259836 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259856 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.259947 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\" (UID: \"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad\") " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.260617 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.261926 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs" (OuterVolumeSpecName: "logs") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.265158 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7" (OuterVolumeSpecName: "kube-api-access-tpmk7") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "kube-api-access-tpmk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.269714 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts" (OuterVolumeSpecName: "scripts") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.273853 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.273912 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph" (OuterVolumeSpecName: "ceph") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.330223 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.363827 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.363857 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364035 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364669 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmk7\" (UniqueName: \"kubernetes.io/projected/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-kube-api-access-tpmk7\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364697 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364707 5031 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364715 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.364771 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.371458 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data" (OuterVolumeSpecName: "config-data") pod "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" (UID: "3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.393985 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.466272 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.466301 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.466311 5031 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.641332 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fmrct"] Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.879924 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b47759886-4vh7j" event={"ID":"7cfc507f-5595-4ff5-9f5f-8942dc5468dc","Type":"ContainerStarted","Data":"10ebec012fc6cc12a182ffc6ff10455f5c9140e16e55fe463a3b63e39be53b16"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.879972 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b47759886-4vh7j" event={"ID":"7cfc507f-5595-4ff5-9f5f-8942dc5468dc","Type":"ContainerStarted","Data":"d19465a5fda5b6d06e584af7e411167d8574ed48d6fa059d282292f351f83bce"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.880850 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fmrct" event={"ID":"73da3d2b-eb56-4382-9091-6d353d461127","Type":"ContainerStarted","Data":"f7bf34e2e47250bac8b8633107600f62995523040e8c5ebcbb9fa7827fdb5791"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.882676 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerStarted","Data":"9ac22e104a84b3a5f265e5851d0123ca0b36600e3ee0d502b6982b6f242f7c07"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.882705 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerStarted","Data":"2e32fa1359c13d0969696db27ec62a31f3c0af1897840cb4b6d5af323815d8a4"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.885603 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerStarted","Data":"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.885632 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerStarted","Data":"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.885811 5031 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-56bcfc8bf7-tfqs6" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon" containerID="cri-o://0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0" gracePeriod=30 Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.885810 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56bcfc8bf7-tfqs6" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon-log" containerID="cri-o://9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1" gracePeriod=30 Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.888421 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad","Type":"ContainerDied","Data":"4b3b8b2bb78f45cfebd5d9f92873de2223c9de9e2d5ba6804be94e5a13416798"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.888460 5031 scope.go:117] "RemoveContainer" containerID="d78ee828af413f13419dc76b297b76c1042b635d859c6f16bdd0b593038cb4ca" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.888497 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.895555 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e136d48-7be7-4b0f-a45c-da6b3d218b8d","Type":"ContainerStarted","Data":"e9930443b4aafa3b323170e1ecdac2f19a350ce5bc6d185a36ca5eb6ce9e517b"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.907287 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b47759886-4vh7j" podStartSLOduration=2.411468213 podStartE2EDuration="10.907259976s" podCreationTimestamp="2026-01-29 09:29:01 +0000 UTC" firstStartedPulling="2026-01-29 09:29:02.765945157 +0000 UTC m=+3023.265533109" lastFinishedPulling="2026-01-29 09:29:11.26173692 +0000 UTC m=+3031.761324872" observedRunningTime="2026-01-29 09:29:11.899119218 +0000 UTC m=+3032.398707180" watchObservedRunningTime="2026-01-29 09:29:11.907259976 +0000 UTC m=+3032.406847928" Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.914164 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerStarted","Data":"fdc6a7a6321f4dc1b16f1f4c21efb15f116b3bb1f42b00571636d9d82419c3a6"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.914201 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d75c5b5c-k4vm8" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon-log" containerID="cri-o://58ed655693c4ba29cce148da159a7281adfc8d7d16910978c1618e2b959258e1" gracePeriod=30 Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.914240 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerStarted","Data":"58ed655693c4ba29cce148da159a7281adfc8d7d16910978c1618e2b959258e1"} Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.914317 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d75c5b5c-k4vm8" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon" containerID="cri-o://fdc6a7a6321f4dc1b16f1f4c21efb15f116b3bb1f42b00571636d9d82419c3a6" gracePeriod=30 Jan 29 09:29:11 crc 
kubenswrapper[5031]: I0129 09:29:11.924262 5031 scope.go:117] "RemoveContainer" containerID="60b12b83310ff392874a38b37df08049ef7d9295e2e5de69075a1e5b8ec19dab"
Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.931781 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.931763442 podStartE2EDuration="8.931763442s" podCreationTimestamp="2026-01-29 09:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:11.927825426 +0000 UTC m=+3032.427413378" watchObservedRunningTime="2026-01-29 09:29:11.931763442 +0000 UTC m=+3032.431351394"
Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.954302 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5df6bb9c74-nlm69" podStartSLOduration=2.092769438 podStartE2EDuration="10.954284744s" podCreationTimestamp="2026-01-29 09:29:01 +0000 UTC" firstStartedPulling="2026-01-29 09:29:02.428349917 +0000 UTC m=+3022.927937869" lastFinishedPulling="2026-01-29 09:29:11.289865223 +0000 UTC m=+3031.789453175" observedRunningTime="2026-01-29 09:29:11.951209991 +0000 UTC m=+3032.450797963" watchObservedRunningTime="2026-01-29 09:29:11.954284744 +0000 UTC m=+3032.453872686"
Jan 29 09:29:11 crc kubenswrapper[5031]: I0129 09:29:11.982233 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56bcfc8bf7-tfqs6" podStartSLOduration=2.12533791 podStartE2EDuration="13.982207481s" podCreationTimestamp="2026-01-29 09:28:58 +0000 UTC" firstStartedPulling="2026-01-29 09:28:59.434679536 +0000 UTC m=+3019.934267488" lastFinishedPulling="2026-01-29 09:29:11.291549107 +0000 UTC m=+3031.791137059" observedRunningTime="2026-01-29 09:29:11.975566023 +0000 UTC m=+3032.475153995" watchObservedRunningTime="2026-01-29 09:29:11.982207481 +0000 UTC m=+3032.481795443"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.001052 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54d75c5b5c-k4vm8" podStartSLOduration=2.447744062 podStartE2EDuration="14.001033064s" podCreationTimestamp="2026-01-29 09:28:58 +0000 UTC" firstStartedPulling="2026-01-29 09:28:59.705972352 +0000 UTC m=+3020.205560304" lastFinishedPulling="2026-01-29 09:29:11.259261354 +0000 UTC m=+3031.758849306" observedRunningTime="2026-01-29 09:29:11.99528229 +0000 UTC m=+3032.494870262" watchObservedRunningTime="2026-01-29 09:29:12.001033064 +0000 UTC m=+3032.500621016"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.022417 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.036751 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.055602 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:29:12 crc kubenswrapper[5031]: E0129 09:29:12.056078 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-httpd"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.056103 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-httpd"
Jan 29 09:29:12 crc kubenswrapper[5031]: E0129 09:29:12.056122 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-log"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.056131 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-log"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.056634 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-httpd"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.056662 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" containerName="glance-log"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.061013 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.071350 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.077910 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.077942 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.189583 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190054 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-logs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190113 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190241 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxsc\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-kube-api-access-jpxsc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190300 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190491 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190701 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190784 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.190840 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292350 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292480 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-logs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292513 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292584 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxsc\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-kube-api-access-jpxsc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292622 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292705 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292751 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292784 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.292822 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.293934 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.294509 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.294735 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e631cdf5-7a95-457f-95ac-8632231e0cd7-logs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.297571 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad" path="/var/lib/kubelet/pods/3b8d9c18-a0c3-4765-bcd3-b1ff972cf0ad/volumes"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.299124 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.299474 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.300113 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.301021 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.306332 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e631cdf5-7a95-457f-95ac-8632231e0cd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.314559 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxsc\" (UniqueName: \"kubernetes.io/projected/e631cdf5-7a95-457f-95ac-8632231e0cd7-kube-api-access-jpxsc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.324791 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e631cdf5-7a95-457f-95ac-8632231e0cd7\") " pod="openstack/glance-default-external-api-0"
Jan 29 09:29:12 crc kubenswrapper[5031]: I0129 09:29:12.457488 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:13 crc kubenswrapper[5031]: I0129 09:29:13.052720 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:29:13 crc kubenswrapper[5031]: I0129 09:29:13.958759 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e631cdf5-7a95-457f-95ac-8632231e0cd7","Type":"ContainerStarted","Data":"a2a533e0c7ef92611d563f7295fd152aa1e9c4dbea297d214ee108560614afa3"}
Jan 29 09:29:13 crc kubenswrapper[5031]: I0129 09:29:13.960084 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e631cdf5-7a95-457f-95ac-8632231e0cd7","Type":"ContainerStarted","Data":"d5233e23ef46fa79eec0b97bff1d679988b92d68b75744d44f9fc16b407531c5"}
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.184633 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.184696 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.258408 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.258959 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.975026 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e631cdf5-7a95-457f-95ac-8632231e0cd7","Type":"ContainerStarted","Data":"971ee9fd4b80b8d1bafcf40824bd7cd40d164956694354f259b8c39cb5e7cad6"}
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.975093 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:14 crc kubenswrapper[5031]: I0129 09:29:14.975108 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:15 crc kubenswrapper[5031]: I0129 09:29:15.061786 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.061763898 podStartE2EDuration="3.061763898s" podCreationTimestamp="2026-01-29 09:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:15.060990347 +0000 UTC m=+3035.560578299" watchObservedRunningTime="2026-01-29 09:29:15.061763898 +0000 UTC m=+3035.561351850"
Jan 29 09:29:16 crc kubenswrapper[5031]: I0129 09:29:16.996809 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 09:29:18 crc kubenswrapper[5031]: I0129 09:29:18.606546 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56bcfc8bf7-tfqs6"
Jan 29 09:29:18 crc kubenswrapper[5031]: I0129 09:29:18.944283 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54d75c5b5c-k4vm8"
Jan 29 09:29:19 crc kubenswrapper[5031]: I0129 09:29:19.018598 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fmrct" event={"ID":"73da3d2b-eb56-4382-9091-6d353d461127","Type":"ContainerStarted","Data":"82a4f282acb27b575f301c924a204e3ba6d40f2b111b0191d052f1ebbc322763"}
Jan 29 09:29:19 crc kubenswrapper[5031]: I0129 09:29:19.039154 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-fmrct" podStartSLOduration=4.433310326 podStartE2EDuration="11.039138359s" podCreationTimestamp="2026-01-29 09:29:08 +0000 UTC" firstStartedPulling="2026-01-29 09:29:11.653725495 +0000 UTC m=+3032.153313447" lastFinishedPulling="2026-01-29 09:29:18.259553528 +0000 UTC m=+3038.759141480" observedRunningTime="2026-01-29 09:29:19.034505455 +0000 UTC m=+3039.534093407" watchObservedRunningTime="2026-01-29 09:29:19.039138359 +0000 UTC m=+3039.538726311"
Jan 29 09:29:19 crc kubenswrapper[5031]: I0129 09:29:19.062895 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:19 crc kubenswrapper[5031]: I0129 09:29:19.065899 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 09:29:20 crc kubenswrapper[5031]: I0129 09:29:20.290219 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"
Jan 29 09:29:20 crc kubenswrapper[5031]: E0129 09:29:20.290891 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.050411 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5df6bb9c74-nlm69"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.052294 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b47759886-4vh7j"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.052321 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b47759886-4vh7j"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.052601 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5df6bb9c74-nlm69"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.058410 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.065492 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b47759886-4vh7j" podUID="7cfc507f-5595-4ff5-9f5f-8942dc5468dc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.458821 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.458881 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.496033 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:22 crc kubenswrapper[5031]: I0129 09:29:22.502411 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:23 crc kubenswrapper[5031]: I0129 09:29:23.076721 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:23 crc kubenswrapper[5031]: I0129 09:29:23.076758 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:25 crc kubenswrapper[5031]: I0129 09:29:25.143015 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:25 crc kubenswrapper[5031]: I0129 09:29:25.143144 5031 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 09:29:25 crc kubenswrapper[5031]: I0129 09:29:25.158477 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 09:29:29 crc kubenswrapper[5031]: I0129 09:29:29.131134 5031 generic.go:334] "Generic (PLEG): container finished" podID="73da3d2b-eb56-4382-9091-6d353d461127" containerID="82a4f282acb27b575f301c924a204e3ba6d40f2b111b0191d052f1ebbc322763" exitCode=0
Jan 29 09:29:29 crc kubenswrapper[5031]: I0129 09:29:29.131248 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fmrct" event={"ID":"73da3d2b-eb56-4382-9091-6d353d461127","Type":"ContainerDied","Data":"82a4f282acb27b575f301c924a204e3ba6d40f2b111b0191d052f1ebbc322763"}
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.629785 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fmrct"
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.748952 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvswt\" (UniqueName: \"kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt\") pod \"73da3d2b-eb56-4382-9091-6d353d461127\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") "
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.749021 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data\") pod \"73da3d2b-eb56-4382-9091-6d353d461127\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") "
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.750041 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle\") pod \"73da3d2b-eb56-4382-9091-6d353d461127\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") "
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.750216 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data\") pod \"73da3d2b-eb56-4382-9091-6d353d461127\" (UID: \"73da3d2b-eb56-4382-9091-6d353d461127\") "
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.755325 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt" (OuterVolumeSpecName: "kube-api-access-jvswt") pod "73da3d2b-eb56-4382-9091-6d353d461127" (UID: "73da3d2b-eb56-4382-9091-6d353d461127"). InnerVolumeSpecName "kube-api-access-jvswt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.757910 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data" (OuterVolumeSpecName: "config-data") pod "73da3d2b-eb56-4382-9091-6d353d461127" (UID: "73da3d2b-eb56-4382-9091-6d353d461127"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.760100 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "73da3d2b-eb56-4382-9091-6d353d461127" (UID: "73da3d2b-eb56-4382-9091-6d353d461127"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.792300 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73da3d2b-eb56-4382-9091-6d353d461127" (UID: "73da3d2b-eb56-4382-9091-6d353d461127"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.853276 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.853318 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.853331 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvswt\" (UniqueName: \"kubernetes.io/projected/73da3d2b-eb56-4382-9091-6d353d461127-kube-api-access-jvswt\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[5031]: I0129 09:29:30.853345 5031 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/73da3d2b-eb56-4382-9091-6d353d461127-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.151270 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fmrct" event={"ID":"73da3d2b-eb56-4382-9091-6d353d461127","Type":"ContainerDied","Data":"f7bf34e2e47250bac8b8633107600f62995523040e8c5ebcbb9fa7827fdb5791"}
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.151586 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7bf34e2e47250bac8b8633107600f62995523040e8c5ebcbb9fa7827fdb5791"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.151322 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fmrct"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.538251 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:31 crc kubenswrapper[5031]: E0129 09:29:31.538750 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73da3d2b-eb56-4382-9091-6d353d461127" containerName="manila-db-sync"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.538770 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="73da3d2b-eb56-4382-9091-6d353d461127" containerName="manila-db-sync"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.538994 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="73da3d2b-eb56-4382-9091-6d353d461127" containerName="manila-db-sync"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.540771 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.549804 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-g9thb"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.549899 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.549946 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.554477 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567263 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567322 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567431 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567450 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567532 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8nx\" (UniqueName: \"kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.567561 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.578963 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.581233 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.586758 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.604571 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.613937 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.631441 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.668888 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv8nx\" (UniqueName: \"kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.668945 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.669002 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.669031 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.669103 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.669123 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.674152 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.674242 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.676083 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.695995 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.707342 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.712489 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ptpjh"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.719321 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.723933 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv8nx\" (UniqueName: \"kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx\") pod \"manila-scheduler-0\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") " pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.740623 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ptpjh"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.757427 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b47759886-4vh7j" podUID="7cfc507f-5595-4ff5-9f5f-8942dc5468dc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771613 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771660 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n4c8\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771686 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771710 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771736 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771801 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771833 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.771925 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.875923 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.875986 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876010 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876071 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-config\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876187 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876211 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n4c8\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876243 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvnx\" (UniqueName: \"kubernetes.io/projected/3b0d7949-564d-4b3d-84f8-038fc952a24f-kube-api-access-pdvnx\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876267 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876298 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876330 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876349 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876509 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.876994 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.877191 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.881886 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.883745 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.883860 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.884194 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.889608 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.900920 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.901079 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.901529 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.911478 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n4c8\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8\") pod \"manila-share-share1-0\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.924308 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.965777 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.967509 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.980004 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvnx\" (UniqueName: \"kubernetes.io/projected/3b0d7949-564d-4b3d-84f8-038fc952a24f-kube-api-access-pdvnx\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.980067 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.980262 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.980297 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.980330 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.981240 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.981449 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.982389 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.982434 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-config\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.983078 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.984501 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:31 crc kubenswrapper[5031]: I0129 09:29:31.984743 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b0d7949-564d-4b3d-84f8-038fc952a24f-config\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.008930 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.018065 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvnx\" (UniqueName: \"kubernetes.io/projected/3b0d7949-564d-4b3d-84f8-038fc952a24f-kube-api-access-pdvnx\") pod \"dnsmasq-dns-69655fd4bf-ptpjh\" (UID: \"3b0d7949-564d-4b3d-84f8-038fc952a24f\") " pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085480 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085562 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085589 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8lcf\" (UniqueName: \"kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085650 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085690 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085716 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.085745 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.099747 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187230 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187512 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187582 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187637 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187661 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8lcf\" (UniqueName: \"kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187696 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.187728 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.191530 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.192090 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.195270 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.198320 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.207660 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.210757 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.217574 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8lcf\" (UniqueName: \"kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf\") pod \"manila-api-0\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.287117 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"
Jan 29 09:29:32 crc kubenswrapper[5031]: E0129 09:29:32.287590 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.365016 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.565500 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.669807 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 29 09:29:32 crc kubenswrapper[5031]: I0129 09:29:32.812222 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ptpjh"]
Jan 29 09:29:32 crc kubenswrapper[5031]: W0129 09:29:32.820728 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b0d7949_564d_4b3d_84f8_038fc952a24f.slice/crio-3c17be107b79f493ac56450ec363751ac2b6b0f02c0847504368d336482201ff WatchSource:0}: Error finding container 3c17be107b79f493ac56450ec363751ac2b6b0f02c0847504368d336482201ff: Status 404 returned error can't find the container with id 3c17be107b79f493ac56450ec363751ac2b6b0f02c0847504368d336482201ff
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.124626 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 29 09:29:33 crc kubenswrapper[5031]: W0129 09:29:33.183068 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1db2e65d_f3a3_42d6_abcd_f4d7f9c6fcd9.slice/crio-ec0e48008a472a30009a326c33c2e8d382c2f93f8ee9a6c2ca382811b91d4682 WatchSource:0}: Error finding container ec0e48008a472a30009a326c33c2e8d382c2f93f8ee9a6c2ca382811b91d4682: Status 404 returned error can't find the container with id ec0e48008a472a30009a326c33c2e8d382c2f93f8ee9a6c2ca382811b91d4682
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.273672 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerStarted","Data":"5f259c745b4dc593f4899ee1ccf1f38ae13f99e581326810e283f09cc86fd630"}
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.277500 5031 generic.go:334] "Generic (PLEG): container finished" podID="3b0d7949-564d-4b3d-84f8-038fc952a24f" containerID="48c3514ca9e6d6660b95ecd2b0efaef44772376954096cec04fb7991bdc720af" exitCode=0
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.277621 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" event={"ID":"3b0d7949-564d-4b3d-84f8-038fc952a24f","Type":"ContainerDied","Data":"48c3514ca9e6d6660b95ecd2b0efaef44772376954096cec04fb7991bdc720af"}
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.277683 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" event={"ID":"3b0d7949-564d-4b3d-84f8-038fc952a24f","Type":"ContainerStarted","Data":"3c17be107b79f493ac56450ec363751ac2b6b0f02c0847504368d336482201ff"}
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.280433 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerStarted","Data":"b2984e763797e59d58eb94d73f8459acde17d6b05f2521cc398f2f826cb7d729"}
Jan 29 09:29:33 crc kubenswrapper[5031]: I0129 09:29:33.283210 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerStarted","Data":"ec0e48008a472a30009a326c33c2e8d382c2f93f8ee9a6c2ca382811b91d4682"}
Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.305076 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerStarted","Data":"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c"} Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.325463 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" event={"ID":"3b0d7949-564d-4b3d-84f8-038fc952a24f","Type":"ContainerStarted","Data":"87c6857cbc4d020316bfdae24be1c51bc17d038c32812b67fd0bf24497d1e1c0"} Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.326073 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.334955 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerStarted","Data":"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"} Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.352506 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" podStartSLOduration=3.352478566 podStartE2EDuration="3.352478566s" podCreationTimestamp="2026-01-29 09:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:34.349924728 +0000 UTC m=+3054.849512680" watchObservedRunningTime="2026-01-29 09:29:34.352478566 +0000 UTC m=+3054.852066518" Jan 29 09:29:34 crc kubenswrapper[5031]: I0129 09:29:34.510989 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.355550 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerStarted","Data":"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd"} Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.356128 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.355904 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api-log" containerID="cri-o://bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" gracePeriod=30 Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.356234 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api" containerID="cri-o://862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" gracePeriod=30 Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.363222 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerStarted","Data":"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"} Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.389226 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.389203564 podStartE2EDuration="4.389203564s" podCreationTimestamp="2026-01-29 09:29:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:35.371999978 +0000 UTC m=+3055.871587930" watchObservedRunningTime="2026-01-29 09:29:35.389203564 +0000 UTC m=+3055.888791516" Jan 29 09:29:35 crc kubenswrapper[5031]: I0129 09:29:35.404604 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.72275117 podStartE2EDuration="4.404579782s" podCreationTimestamp="2026-01-29 09:29:31 +0000 UTC" firstStartedPulling="2026-01-29 09:29:32.563613259 +0000 UTC m=+3053.063201211" lastFinishedPulling="2026-01-29 09:29:33.245441871 +0000 UTC m=+3053.745029823" observedRunningTime="2026-01-29 09:29:35.400574206 +0000 UTC m=+3055.900162158" watchObservedRunningTime="2026-01-29 09:29:35.404579782 +0000 UTC m=+3055.904167734" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.092831 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197193 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197610 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197638 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197677 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8lcf\" (UniqueName: \"kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197719 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.197821 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.198004 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id\") pod \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\" (UID: \"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9\") " Jan 29 09:29:36 crc kubenswrapper[5031]: 
I0129 09:29:36.199264 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs" (OuterVolumeSpecName: "logs") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.199615 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.199857 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.199903 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.208314 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.212733 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf" (OuterVolumeSpecName: "kube-api-access-z8lcf") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "kube-api-access-z8lcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.212850 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts" (OuterVolumeSpecName: "scripts") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.253919 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data" (OuterVolumeSpecName: "config-data") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.271196 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" (UID: "1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.301573 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.301610 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.301622 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.301635 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8lcf\" (UniqueName: \"kubernetes.io/projected/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-kube-api-access-z8lcf\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.301653 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.377339 5031 generic.go:334] "Generic (PLEG): container finished" podID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerID="862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" exitCode=143 Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.377401 5031 generic.go:334] "Generic (PLEG): container finished" podID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerID="bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" exitCode=143 Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.378466 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.378939 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerDied","Data":"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd"} Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.378965 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerDied","Data":"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c"} Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.378978 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9","Type":"ContainerDied","Data":"ec0e48008a472a30009a326c33c2e8d382c2f93f8ee9a6c2ca382811b91d4682"} Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.378995 5031 scope.go:117] "RemoveContainer" containerID="862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.426607 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.448834 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.465453 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:36 crc kubenswrapper[5031]: E0129 09:29:36.466171 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api-log" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.466258 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api-log" Jan 29 09:29:36 crc kubenswrapper[5031]: E0129 09:29:36.466346 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.466418 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.466692 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.466781 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" containerName="manila-api-log" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.468114 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.479260 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.481738 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.481819 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.482174 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.482831 5031 scope.go:117] "RemoveContainer" containerID="bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.564621 5031 scope.go:117] "RemoveContainer" containerID="862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" Jan 29 09:29:36 crc kubenswrapper[5031]: E0129 09:29:36.565920 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd\": container with ID starting with 862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd not found: ID does not exist" containerID="862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.565989 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd"} err="failed to get container status \"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd\": rpc error: code = NotFound desc = could not find container \"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd\": container with ID starting with 862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd not found: ID does not exist" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.566034 5031 scope.go:117] "RemoveContainer" containerID="bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" Jan 29 09:29:36 crc kubenswrapper[5031]: E0129 09:29:36.566611 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c\": container with ID starting with bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c not found: ID does not exist" containerID="bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.566641 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c"} err="failed to get container status \"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c\": rpc error: code = NotFound desc = could not find container \"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c\": container with ID starting with bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c not found: ID does not exist" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.566660 5031 scope.go:117] "RemoveContainer" containerID="862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd" Jan 29 09:29:36 
crc kubenswrapper[5031]: I0129 09:29:36.573431 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd"} err="failed to get container status \"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd\": rpc error: code = NotFound desc = could not find container \"862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd\": container with ID starting with 862cb9e8ba625700d61fcbe6466836fa99e62e75af73ac8838ad6889639157dd not found: ID does not exist" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.573484 5031 scope.go:117] "RemoveContainer" containerID="bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.575576 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c"} err="failed to get container status \"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c\": rpc error: code = NotFound desc = could not find container \"bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c\": container with ID starting with bd956b18aec16c1fa4354cd458480a8fcbb98d3a71849bd208db04895ccf861c not found: ID does not exist" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607431 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607500 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-etc-machine-id\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607572 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-public-tls-certs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607598 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-scripts\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607713 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdhf\" (UniqueName: \"kubernetes.io/projected/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-kube-api-access-krdhf\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607775 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data\") pod \"manila-api-0\" (UID: 
\"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607821 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data-custom\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607862 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.607922 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-logs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709331 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-public-tls-certs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709409 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-scripts\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709495 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdhf\" (UniqueName: \"kubernetes.io/projected/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-kube-api-access-krdhf\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709541 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709558 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data-custom\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709584 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709640 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-logs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709689 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709716 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-etc-machine-id\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.709811 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-etc-machine-id\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.712923 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-logs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.716404 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data-custom\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.717780 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-config-data\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.718264 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-scripts\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.719111 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.719983 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.738002 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-public-tls-certs\") 
pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.741958 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdhf\" (UniqueName: \"kubernetes.io/projected/2ce35ae9-25db-409d-af6b-0f5d94e61ea7-kube-api-access-krdhf\") pod \"manila-api-0\" (UID: \"2ce35ae9-25db-409d-af6b-0f5d94e61ea7\") " pod="openstack/manila-api-0" Jan 29 09:29:36 crc kubenswrapper[5031]: I0129 09:29:36.842295 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 09:29:37 crc kubenswrapper[5031]: I0129 09:29:37.444883 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 09:29:37 crc kubenswrapper[5031]: W0129 09:29:37.446335 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce35ae9_25db_409d_af6b_0f5d94e61ea7.slice/crio-742665f1750887323f411c4cf2b4692aeaf98b4ae9a713e30c770109fa1cc3f4 WatchSource:0}: Error finding container 742665f1750887323f411c4cf2b4692aeaf98b4ae9a713e30c770109fa1cc3f4: Status 404 returned error can't find the container with id 742665f1750887323f411c4cf2b4692aeaf98b4ae9a713e30c770109fa1cc3f4 Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.320953 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9" path="/var/lib/kubelet/pods/1db2e65d-f3a3-42d6-abcd-f4d7f9c6fcd9/volumes" Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.403096 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2ce35ae9-25db-409d-af6b-0f5d94e61ea7","Type":"ContainerStarted","Data":"15dfd93ec5c45143f8a272aead4bc80586a0167285e36e8f462b8ddb2ec5b695"} Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.403141 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2ce35ae9-25db-409d-af6b-0f5d94e61ea7","Type":"ContainerStarted","Data":"de0c2627f8ce9246b2dd8707cabbb25d17ebaeb97c9b3eb36d484115bdccbf87"} Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.403153 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2ce35ae9-25db-409d-af6b-0f5d94e61ea7","Type":"ContainerStarted","Data":"742665f1750887323f411c4cf2b4692aeaf98b4ae9a713e30c770109fa1cc3f4"} Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.403258 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.430626 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.430600216 podStartE2EDuration="2.430600216s" podCreationTimestamp="2026-01-29 09:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:38.420675242 +0000 UTC m=+3058.920263194" watchObservedRunningTime="2026-01-29 09:29:38.430600216 +0000 UTC m=+3058.930188188" Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.494834 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.495163 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96cf2a84-0927-4208-8959-96682bf54375" 
containerName="ceilometer-central-agent" containerID="cri-o://d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156" gracePeriod=30 Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.495669 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="proxy-httpd" containerID="cri-o://04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b" gracePeriod=30 Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.495727 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="sg-core" containerID="cri-o://13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57" gracePeriod=30 Jan 29 09:29:38 crc kubenswrapper[5031]: I0129 09:29:38.495793 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-notification-agent" containerID="cri-o://aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991" gracePeriod=30 Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415239 5031 generic.go:334] "Generic (PLEG): container finished" podID="96cf2a84-0927-4208-8959-96682bf54375" containerID="04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b" exitCode=0 Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415561 5031 generic.go:334] "Generic (PLEG): container finished" podID="96cf2a84-0927-4208-8959-96682bf54375" containerID="13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57" exitCode=2 Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415570 5031 generic.go:334] "Generic (PLEG): container finished" podID="96cf2a84-0927-4208-8959-96682bf54375" containerID="d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156" exitCode=0 Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415304 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerDied","Data":"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"} Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415651 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerDied","Data":"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"} Jan 29 09:29:39 crc kubenswrapper[5031]: I0129 09:29:39.415703 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerDied","Data":"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"} Jan 29 09:29:41 crc kubenswrapper[5031]: I0129 09:29:41.883426 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.102245 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-ptpjh" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.191573 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"] Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.191852 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" 
podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="dnsmasq-dns" containerID="cri-o://541d39aaab80762a1903bd6d6d3ba809648d9bfec33ccc5156b026a0496091e5" gracePeriod=10 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.460630 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475028 5031 generic.go:334] "Generic (PLEG): container finished" podID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerID="0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0" exitCode=137 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475060 5031 generic.go:334] "Generic (PLEG): container finished" podID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerID="9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1" exitCode=137 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475111 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerDied","Data":"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475141 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerDied","Data":"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475151 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56bcfc8bf7-tfqs6" event={"ID":"5eda84d3-0c58-4449-80e1-5198ecb37e22","Type":"ContainerDied","Data":"81840adab319d5b10d20cba8c2abfae1c431ba7a89b651f0d07f5c2fd6bfb6c0"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475167 5031 scope.go:117] "RemoveContainer" containerID="0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.475296 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56bcfc8bf7-tfqs6" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.493809 5031 generic.go:334] "Generic (PLEG): container finished" podID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerID="fdc6a7a6321f4dc1b16f1f4c21efb15f116b3bb1f42b00571636d9d82419c3a6" exitCode=137 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.493851 5031 generic.go:334] "Generic (PLEG): container finished" podID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerID="58ed655693c4ba29cce148da159a7281adfc8d7d16910978c1618e2b959258e1" exitCode=137 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.493899 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerDied","Data":"fdc6a7a6321f4dc1b16f1f4c21efb15f116b3bb1f42b00571636d9d82419c3a6"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.493932 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerDied","Data":"58ed655693c4ba29cce148da159a7281adfc8d7d16910978c1618e2b959258e1"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.500707 5031 generic.go:334] "Generic (PLEG): container finished" podID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerID="541d39aaab80762a1903bd6d6d3ba809648d9bfec33ccc5156b026a0496091e5" exitCode=0 Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.500784 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" event={"ID":"415da4d0-c38a-48ff-a0ed-8dccab506bca","Type":"ContainerDied","Data":"541d39aaab80762a1903bd6d6d3ba809648d9bfec33ccc5156b026a0496091e5"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.510443 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerStarted","Data":"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9"} Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.553913 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs\") pod \"5eda84d3-0c58-4449-80e1-5198ecb37e22\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.554005 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slvh2\" (UniqueName: \"kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2\") pod \"5eda84d3-0c58-4449-80e1-5198ecb37e22\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.554033 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data\") pod \"5eda84d3-0c58-4449-80e1-5198ecb37e22\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.554335 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key\") pod \"5eda84d3-0c58-4449-80e1-5198ecb37e22\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.554393 5031 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts\") pod \"5eda84d3-0c58-4449-80e1-5198ecb37e22\" (UID: \"5eda84d3-0c58-4449-80e1-5198ecb37e22\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.556287 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs" (OuterVolumeSpecName: "logs") pod "5eda84d3-0c58-4449-80e1-5198ecb37e22" (UID: "5eda84d3-0c58-4449-80e1-5198ecb37e22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.556594 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d75c5b5c-k4vm8" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.580814 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5eda84d3-0c58-4449-80e1-5198ecb37e22" (UID: "5eda84d3-0c58-4449-80e1-5198ecb37e22"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.587613 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2" (OuterVolumeSpecName: "kube-api-access-slvh2") pod "5eda84d3-0c58-4449-80e1-5198ecb37e22" (UID: "5eda84d3-0c58-4449-80e1-5198ecb37e22"). InnerVolumeSpecName "kube-api-access-slvh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.588430 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts" (OuterVolumeSpecName: "scripts") pod "5eda84d3-0c58-4449-80e1-5198ecb37e22" (UID: "5eda84d3-0c58-4449-80e1-5198ecb37e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.604712 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data" (OuterVolumeSpecName: "config-data") pod "5eda84d3-0c58-4449-80e1-5198ecb37e22" (UID: "5eda84d3-0c58-4449-80e1-5198ecb37e22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.659900 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key\") pod \"b9752d31-4851-463a-9d9c-f27283dd5f54\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.660101 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts\") pod \"b9752d31-4851-463a-9d9c-f27283dd5f54\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.660123 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs\") pod \"b9752d31-4851-463a-9d9c-f27283dd5f54\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.660182 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data\") pod \"b9752d31-4851-463a-9d9c-f27283dd5f54\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.660529 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hcs4\" (UniqueName: \"kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4\") pod \"b9752d31-4851-463a-9d9c-f27283dd5f54\" (UID: \"b9752d31-4851-463a-9d9c-f27283dd5f54\") " Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.661144 5031 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5eda84d3-0c58-4449-80e1-5198ecb37e22-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.661164 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.661173 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eda84d3-0c58-4449-80e1-5198ecb37e22-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.661183 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slvh2\" (UniqueName: \"kubernetes.io/projected/5eda84d3-0c58-4449-80e1-5198ecb37e22-kube-api-access-slvh2\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.661194 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eda84d3-0c58-4449-80e1-5198ecb37e22-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.662777 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs" (OuterVolumeSpecName: "logs") pod "b9752d31-4851-463a-9d9c-f27283dd5f54" (UID: "b9752d31-4851-463a-9d9c-f27283dd5f54"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.668587 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b9752d31-4851-463a-9d9c-f27283dd5f54" (UID: "b9752d31-4851-463a-9d9c-f27283dd5f54"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.698482 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts" (OuterVolumeSpecName: "scripts") pod "b9752d31-4851-463a-9d9c-f27283dd5f54" (UID: "b9752d31-4851-463a-9d9c-f27283dd5f54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.711146 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4" (OuterVolumeSpecName: "kube-api-access-5hcs4") pod "b9752d31-4851-463a-9d9c-f27283dd5f54" (UID: "b9752d31-4851-463a-9d9c-f27283dd5f54"). InnerVolumeSpecName "kube-api-access-5hcs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.725851 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data" (OuterVolumeSpecName: "config-data") pod "b9752d31-4851-463a-9d9c-f27283dd5f54" (UID: "b9752d31-4851-463a-9d9c-f27283dd5f54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.737212 5031 scope.go:117] "RemoveContainer" containerID="9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.763888 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9752d31-4851-463a-9d9c-f27283dd5f54-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.764211 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.764221 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9752d31-4851-463a-9d9c-f27283dd5f54-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.764235 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hcs4\" (UniqueName: \"kubernetes.io/projected/b9752d31-4851-463a-9d9c-f27283dd5f54-kube-api-access-5hcs4\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.764248 5031 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9752d31-4851-463a-9d9c-f27283dd5f54-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.788316 5031 scope.go:117] "RemoveContainer" containerID="0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0" Jan 29 09:29:42 crc kubenswrapper[5031]: E0129 
09:29:42.788844 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0\": container with ID starting with 0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0 not found: ID does not exist" containerID="0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.788898 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"} err="failed to get container status \"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0\": rpc error: code = NotFound desc = could not find container \"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0\": container with ID starting with 0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0 not found: ID does not exist"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.788923 5031 scope.go:117] "RemoveContainer" containerID="9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"
Jan 29 09:29:42 crc kubenswrapper[5031]: E0129 09:29:42.789513 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1\": container with ID starting with 9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1 not found: ID does not exist" containerID="9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.789560 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"} err="failed to get container status \"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1\": rpc error: code = NotFound desc = could not find container \"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1\": container with ID starting with 9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1 not found: ID does not exist"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.789579 5031 scope.go:117] "RemoveContainer" containerID="0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.790100 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0"} err="failed to get container status \"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0\": rpc error: code = NotFound desc = could not find container \"0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0\": container with ID starting with 0e491c97f4b1a5e0ff5b7cb0c08a5dbbc4450b6f32c2020980a23b9dabf8d3d0 not found: ID does not exist"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.790129 5031 scope.go:117] "RemoveContainer" containerID="9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.790340 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1"} err="failed to get container status \"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1\": rpc error: code = NotFound desc = could not find container \"9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1\": container with ID starting with 9ec7c0049f40c811c784c21744903fbaeee4d9fdbf968c491ad0ed2a542985d1 not found: ID does not exist"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.957730 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k"
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.977248 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"]
Jan 29 09:29:42 crc kubenswrapper[5031]: I0129 09:29:42.986289 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56bcfc8bf7-tfqs6"]
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.077876 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.077930 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mz7m\" (UniqueName: \"kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.077952 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.078059 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.078095 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.078141 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb\") pod \"415da4d0-c38a-48ff-a0ed-8dccab506bca\" (UID: \"415da4d0-c38a-48ff-a0ed-8dccab506bca\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.123524 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m" (OuterVolumeSpecName: "kube-api-access-7mz7m") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "kube-api-access-7mz7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.168483 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.183199 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mz7m\" (UniqueName: \"kubernetes.io/projected/415da4d0-c38a-48ff-a0ed-8dccab506bca-kube-api-access-7mz7m\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.183243 5031 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.196028 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.199624 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.208682 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.216690 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config" (OuterVolumeSpecName: "config") pod "415da4d0-c38a-48ff-a0ed-8dccab506bca" (UID: "415da4d0-c38a-48ff-a0ed-8dccab506bca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.289433 5031 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.289486 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.289500 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.289513 5031 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415da4d0-c38a-48ff-a0ed-8dccab506bca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.411646 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492186 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492226 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492294 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492438 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492498 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492536 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfrtg\" (UniqueName: \"kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492644 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:43 crc kubenswrapper[5031]: I0129 09:29:43.492668 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd\") pod \"96cf2a84-0927-4208-8959-96682bf54375\" (UID: \"96cf2a84-0927-4208-8959-96682bf54375\") "
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.505245 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.505864 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.509818 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg" (OuterVolumeSpecName: "kube-api-access-gfrtg") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "kube-api-access-gfrtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.509917 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts" (OuterVolumeSpecName: "scripts") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.555249 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerStarted","Data":"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e"}
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.568330 5031 generic.go:334] "Generic (PLEG): container finished" podID="96cf2a84-0927-4208-8959-96682bf54375" containerID="aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991" exitCode=0
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.568502 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.568925 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerDied","Data":"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"}
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.568964 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96cf2a84-0927-4208-8959-96682bf54375","Type":"ContainerDied","Data":"6b4c58cef4103e449cee55c7430a2420c269f5870d31303721b29535605277ed"}
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.568987 5031 scope.go:117] "RemoveContainer" containerID="04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.575817 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d75c5b5c-k4vm8" event={"ID":"b9752d31-4851-463a-9d9c-f27283dd5f54","Type":"ContainerDied","Data":"84a6743538678127fda32b60cc1061735c83f789ced6d05620f5985a641dc20c"}
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.575943 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d75c5b5c-k4vm8"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.595067 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.595098 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96cf2a84-0927-4208-8959-96682bf54375-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.595111 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.595123 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfrtg\" (UniqueName: \"kubernetes.io/projected/96cf2a84-0927-4208-8959-96682bf54375-kube-api-access-gfrtg\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.604359 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k" event={"ID":"415da4d0-c38a-48ff-a0ed-8dccab506bca","Type":"ContainerDied","Data":"59025fade142afdcca41acb4b90aa7b5153f916ca3b1120990327b7da926fc1d"}
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.604928 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-vgw8k"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.610594 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.468203527 podStartE2EDuration="12.610572394s" podCreationTimestamp="2026-01-29 09:29:31 +0000 UTC" firstStartedPulling="2026-01-29 09:29:32.666634463 +0000 UTC m=+3053.166222415" lastFinishedPulling="2026-01-29 09:29:41.80900333 +0000 UTC m=+3062.308591282" observedRunningTime="2026-01-29 09:29:43.592999487 +0000 UTC m=+3064.092587459" watchObservedRunningTime="2026-01-29 09:29:43.610572394 +0000 UTC m=+3064.110160346"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.617596 5031 scope.go:117] "RemoveContainer" containerID="13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.617826 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.665551 5031 scope.go:117] "RemoveContainer" containerID="aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.665781 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.685426 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.697880 5031 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.697914 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.699506 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.706523 5031 scope.go:117] "RemoveContainer" containerID="d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.714194 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-vgw8k"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.727357 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.734614 5031 scope.go:117] "RemoveContainer" containerID="04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:43.735388 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b\": container with ID starting with 04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b not found: ID does not exist" containerID="04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.735427 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b"} err="failed to get container status \"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b\": rpc error: code = NotFound desc = could not find container \"04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b\": container with ID starting with 04b67305d79fdb033e3852edc6fdfbe185cebb4707fbae8d39d713a7129d375b not found: ID does not exist"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.735455 5031 scope.go:117] "RemoveContainer" containerID="13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:43.735828 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57\": container with ID starting with 13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57 not found: ID does not exist" containerID="13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.735855 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57"} err="failed to get container status \"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57\": rpc error: code = NotFound desc = could not find container \"13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57\": container with ID starting with 13bd4cf33ab25cf0a348c85d134c5821f0ce94fa565fad24631fe21bd0f91c57 not found: ID does not exist"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.735872 5031 scope.go:117] "RemoveContainer" containerID="aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:43.736155 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991\": container with ID starting with aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991 not found: ID does not exist" containerID="aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.736175 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991"} err="failed to get container status \"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991\": rpc error: code = NotFound desc = could not find container \"aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991\": container with ID starting with aec8db2fa89327f15c7d8801a78fa103fb930501abd2cafe8f327b0382b07991 not found: ID does not exist"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.736195 5031 scope.go:117] "RemoveContainer" containerID="d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:43.737076 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156\": container with ID starting with d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156 not found: ID does not exist" containerID="d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.737100 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156"} err="failed to get container status \"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156\": rpc error: code = NotFound desc = could not find container \"d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156\": container with ID starting with d3173988e06161ea03906f2bd608a1cf7a62255ca796acd17a5b101d0b9f3156 not found: ID does not exist"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.737115 5031 scope.go:117] "RemoveContainer" containerID="fdc6a7a6321f4dc1b16f1f4c21efb15f116b3bb1f42b00571636d9d82419c3a6"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.739688 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54d75c5b5c-k4vm8"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.775625 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data" (OuterVolumeSpecName: "config-data") pod "96cf2a84-0927-4208-8959-96682bf54375" (UID: "96cf2a84-0927-4208-8959-96682bf54375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.807234 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:43.807274 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96cf2a84-0927-4208-8959-96682bf54375-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.011887 5031 scope.go:117] "RemoveContainer" containerID="58ed655693c4ba29cce148da159a7281adfc8d7d16910978c1618e2b959258e1"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.014773 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.032443 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.072525 5031 scope.go:117] "RemoveContainer" containerID="541d39aaab80762a1903bd6d6d3ba809648d9bfec33ccc5156b026a0496091e5"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.078803 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079245 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079260 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079441 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079448 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079458 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-notification-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079465 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-notification-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079483 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079491 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079501 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-central-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079508 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-central-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079514 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079520 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079533 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="proxy-httpd"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079539 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="proxy-httpd"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079550 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="init"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079555 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="init"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079571 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="sg-core"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079576 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="sg-core"
Jan 29 09:29:44 crc kubenswrapper[5031]: E0129 09:29:44.079591 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="dnsmasq-dns"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079596 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="dnsmasq-dns"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079770 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079785 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079800 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="sg-core"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079810 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="proxy-httpd"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079819 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" containerName="horizon"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079828 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-central-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079836 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cf2a84-0927-4208-8959-96682bf54375" containerName="ceilometer-notification-agent"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079842 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" containerName="horizon-log"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.079848 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" containerName="dnsmasq-dns"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.081691 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.086150 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.086580 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.086903 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.087037 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.139347 5031 scope.go:117] "RemoveContainer" containerID="1cd72422f10bedd3e9139795b2d142915c6af961f6f37df39de766681a245c94"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.218860 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219056 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219105 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219340 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219417 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzk64\" (UniqueName: \"kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219452 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219496 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.219544 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.297118 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415da4d0-c38a-48ff-a0ed-8dccab506bca" path="/var/lib/kubelet/pods/415da4d0-c38a-48ff-a0ed-8dccab506bca/volumes"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.298042 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eda84d3-0c58-4449-80e1-5198ecb37e22" path="/var/lib/kubelet/pods/5eda84d3-0c58-4449-80e1-5198ecb37e22/volumes"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.298806 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96cf2a84-0927-4208-8959-96682bf54375" path="/var/lib/kubelet/pods/96cf2a84-0927-4208-8959-96682bf54375/volumes"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.301128 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9752d31-4851-463a-9d9c-f27283dd5f54" path="/var/lib/kubelet/pods/b9752d31-4851-463a-9d9c-f27283dd5f54/volumes"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.321948 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.321990 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzk64\" (UniqueName: \"kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322028 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322761 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322838 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322898 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322949 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.322966 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.323940 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.324710 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.332557 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.342178 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.342329 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzk64\" (UniqueName: \"kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.343058 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.345766 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.357469 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data\") pod \"ceilometer-0\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.381698 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5df6bb9c74-nlm69"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.436147 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.524248 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b47759886-4vh7j"
Jan 29 09:29:44 crc kubenswrapper[5031]: I0129 09:29:44.963916 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:44 crc kubenswrapper[5031]: W0129 09:29:44.984763 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27640798_ecc3_441f_abec_6ff47185919c.slice/crio-2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6 WatchSource:0}: Error finding container 2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6: Status 404 returned error can't find the container with id 2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6
Jan 29 09:29:45 crc kubenswrapper[5031]: I0129 09:29:45.282357 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"
Jan 29 09:29:45 crc kubenswrapper[5031]: E0129 09:29:45.282646 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:29:45 crc kubenswrapper[5031]: I0129 09:29:45.632802 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerStarted","Data":"2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6"}
Jan 29 09:29:45 crc kubenswrapper[5031]: I0129 09:29:45.948096 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.439560 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5df6bb9c74-nlm69"
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.547869 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b47759886-4vh7j"
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.642893 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerStarted","Data":"bd32375e2ed9b43634c624a6a88d0d825608fecb06400aaea3031a510d1e9d18"}
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.675525 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"]
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.675789 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon-log" containerID="cri-o://2e32fa1359c13d0969696db27ec62a31f3c0af1897840cb4b6d5af323815d8a4" gracePeriod=30
Jan 29 09:29:46 crc kubenswrapper[5031]: I0129 09:29:46.676216 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" containerID="cri-o://9ac22e104a84b3a5f265e5851d0123ca0b36600e3ee0d502b6982b6f242f7c07" gracePeriod=30
Jan 29 09:29:47 crc kubenswrapper[5031]: I0129 09:29:47.654215 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerStarted","Data":"d1173139539ef1ed3c5e36ec545d27139ff6ddafbf19b46c287357afa0c2fe9c"}
Jan 29 09:29:48 crc kubenswrapper[5031]: I0129 09:29:48.665330 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerStarted","Data":"a2f737e7909204e503088977e0fbc381b0949dbbcde395cd4cfa962088fa1366"}
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702023 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerStarted","Data":"b7647b048e7eefd53be1d5d26d7f9d82df5bc6d98f529131ff8991fd2dc65d4d"}
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702663 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702545 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="proxy-httpd" containerID="cri-o://b7647b048e7eefd53be1d5d26d7f9d82df5bc6d98f529131ff8991fd2dc65d4d" gracePeriod=30
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702191 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-central-agent" containerID="cri-o://bd32375e2ed9b43634c624a6a88d0d825608fecb06400aaea3031a510d1e9d18" gracePeriod=30
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702571 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="sg-core" containerID="cri-o://a2f737e7909204e503088977e0fbc381b0949dbbcde395cd4cfa962088fa1366" gracePeriod=30
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.702588 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-notification-agent" containerID="cri-o://d1173139539ef1ed3c5e36ec545d27139ff6ddafbf19b46c287357afa0c2fe9c" gracePeriod=30
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.709937 5031 generic.go:334] "Generic (PLEG): container finished" podID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerID="9ac22e104a84b3a5f265e5851d0123ca0b36600e3ee0d502b6982b6f242f7c07" exitCode=0
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.710002 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerDied","Data":"9ac22e104a84b3a5f265e5851d0123ca0b36600e3ee0d502b6982b6f242f7c07"}
Jan 29 09:29:50 crc kubenswrapper[5031]: I0129 09:29:50.748171 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.324935004 podStartE2EDuration="7.748142026s" podCreationTimestamp="2026-01-29 09:29:43 +0000 UTC" firstStartedPulling="2026-01-29 09:29:44.987242413 +0000 UTC m=+3065.486830365" lastFinishedPulling="2026-01-29 09:29:50.410449435 +0000 UTC m=+3070.910037387" observedRunningTime="2026-01-29 09:29:50.747090868 +0000 UTC m=+3071.246678830" watchObservedRunningTime="2026-01-29 09:29:50.748142026 +0000 UTC m=+3071.247729968"
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.609852 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.721869 5031 generic.go:334] "Generic (PLEG): container finished" podID="27640798-ecc3-441f-abec-6ff47185919c" containerID="a2f737e7909204e503088977e0fbc381b0949dbbcde395cd4cfa962088fa1366" exitCode=2
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.721908 5031 generic.go:334] "Generic (PLEG): container finished" podID="27640798-ecc3-441f-abec-6ff47185919c" containerID="d1173139539ef1ed3c5e36ec545d27139ff6ddafbf19b46c287357afa0c2fe9c" exitCode=0
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.721930 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerDied","Data":"a2f737e7909204e503088977e0fbc381b0949dbbcde395cd4cfa962088fa1366"}
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.721958 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerDied","Data":"d1173139539ef1ed3c5e36ec545d27139ff6ddafbf19b46c287357afa0c2fe9c"}
Jan 29 09:29:51 crc kubenswrapper[5031]: I0129 09:29:51.925788 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Jan 29 09:29:53 crc kubenswrapper[5031]: I0129 09:29:53.858637 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Jan 29 09:29:53 crc kubenswrapper[5031]: I0129 09:29:53.944221 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:54 crc kubenswrapper[5031]: I0129 09:29:54.765667 5031 generic.go:334] "Generic (PLEG): container finished" podID="27640798-ecc3-441f-abec-6ff47185919c" containerID="bd32375e2ed9b43634c624a6a88d0d825608fecb06400aaea3031a510d1e9d18" exitCode=0
Jan 29 09:29:54 crc kubenswrapper[5031]: I0129 09:29:54.765715 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerDied","Data":"bd32375e2ed9b43634c624a6a88d0d825608fecb06400aaea3031a510d1e9d18"}
Jan 29 09:29:54 crc kubenswrapper[5031]: I0129 09:29:54.766124 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="manila-scheduler" containerID="cri-o://bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117" gracePeriod=30
Jan 29 09:29:54 crc kubenswrapper[5031]: I0129 09:29:54.766249 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="probe" containerID="cri-o://f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882" gracePeriod=30
Jan 29 09:29:55 crc kubenswrapper[5031]: I0129 09:29:55.777347 5031 generic.go:334] "Generic (PLEG): container finished" podID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerID="f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882" exitCode=0
Jan 29 09:29:55 crc kubenswrapper[5031]: I0129 09:29:55.778442 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerDied","Data":"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"}
Jan 29 09:29:57 crc kubenswrapper[5031]: I0129 09:29:57.283064 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b"
Jan 29 09:29:57 crc kubenswrapper[5031]: E0129 09:29:57.283574 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:29:58 crc kubenswrapper[5031]: I0129 09:29:58.302263 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.293996 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341036 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341180 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341248 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341287 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341411 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.341459 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv8nx\" (UniqueName: \"kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx\") pod \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\" (UID: \"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e\") "
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.342622 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.350911 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx" (OuterVolumeSpecName: "kube-api-access-vv8nx") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "kube-api-access-vv8nx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.362704 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.363768 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts" (OuterVolumeSpecName: "scripts") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.409543 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.443726 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.443764 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.443776 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.443787 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.443798 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv8nx\" (UniqueName: \"kubernetes.io/projected/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-kube-api-access-vv8nx\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.480753 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data" (OuterVolumeSpecName: "config-data") pod "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" (UID: "1cd9837f-6d67-4549-b604-3d1f3ee7bd5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.546172 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.814078 5031 generic.go:334] "Generic (PLEG): container finished" podID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerID="bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117" exitCode=0
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.814137 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.814160 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerDied","Data":"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"}
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.814497 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1cd9837f-6d67-4549-b604-3d1f3ee7bd5e","Type":"ContainerDied","Data":"b2984e763797e59d58eb94d73f8459acde17d6b05f2521cc398f2f826cb7d729"}
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.814521 5031 scope.go:117] "RemoveContainer" containerID="f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.835408 5031 scope.go:117] "RemoveContainer" containerID="bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.848766 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.858604 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.874466 5031 scope.go:117] "RemoveContainer" containerID="f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"
Jan 29 09:29:59 crc kubenswrapper[5031]: E0129 09:29:59.874897 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882\": container with ID starting with f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882 not found: ID does not exist" containerID="f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.874934 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882"} err="failed to get container status \"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882\": rpc error: code = NotFound desc = could not find container \"f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882\": container with ID starting with f99f38660c4c82f01a2babb46601b40acecc2017d656918330795283b8084882 not found: ID does not exist"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.874961 5031 scope.go:117] "RemoveContainer" containerID="bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"
Jan 29 09:29:59 crc kubenswrapper[5031]: E0129 09:29:59.875375 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117\": container with ID starting with bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117 not found: ID does not exist" containerID="bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"
Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.875397 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117"} err="failed to get container status \"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117\": rpc
error: code = NotFound desc = could not find container \"bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117\": container with ID starting with bca1afb5599646c283b9c8baa8a8e13ca08596a4484931941c9ccac96cb7c117 not found: ID does not exist" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.888476 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 09:29:59 crc kubenswrapper[5031]: E0129 09:29:59.888913 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="manila-scheduler" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.888930 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="manila-scheduler" Jan 29 09:29:59 crc kubenswrapper[5031]: E0129 09:29:59.888958 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="probe" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.888964 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="probe" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.889145 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="manila-scheduler" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.889166 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" containerName="probe" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.891737 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.902792 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.904176 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.952924 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-scripts\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.952981 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/d4320ea6-3657-454b-b535-3776f405d823-kube-api-access-gfwxp\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.953009 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.953106 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data-custom\") pod \"manila-scheduler-0\" 
(UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.953132 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4320ea6-3657-454b-b535-3776f405d823-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:29:59 crc kubenswrapper[5031]: I0129 09:29:59.953198 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055213 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-scripts\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055270 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/d4320ea6-3657-454b-b535-3776f405d823-kube-api-access-gfwxp\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055303 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055374 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055392 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4320ea6-3657-454b-b535-3776f405d823-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055446 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.055566 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4320ea6-3657-454b-b535-3776f405d823-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.059053 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-scripts\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.059116 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.060218 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.060842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4320ea6-3657-454b-b535-3776f405d823-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.077743 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/d4320ea6-3657-454b-b535-3776f405d823-kube-api-access-gfwxp\") pod \"manila-scheduler-0\" (UID: \"d4320ea6-3657-454b-b535-3776f405d823\") " pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.137015 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8"] Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.138207 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.140300 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.140440 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.154430 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8"] Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.249065 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.260353 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.260459 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcw7q\" (UniqueName: \"kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.260527 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.292988 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd9837f-6d67-4549-b604-3d1f3ee7bd5e" path="/var/lib/kubelet/pods/1cd9837f-6d67-4549-b604-3d1f3ee7bd5e/volumes" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.362552 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.362980 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcw7q\" (UniqueName: \"kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.363004 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.364413 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.371536 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.385314 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcw7q\" (UniqueName: \"kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q\") pod \"collect-profiles-29494650-ttqg8\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.470164 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.723170 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.830275 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4320ea6-3657-454b-b535-3776f405d823","Type":"ContainerStarted","Data":"a584eecfb4806c91c8cdad40f7d394acb0685810692cbb767c0cd631c32f02a4"} Jan 29 09:30:00 crc kubenswrapper[5031]: I0129 09:30:00.910774 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8"] Jan 29 09:30:00 crc kubenswrapper[5031]: W0129 09:30:00.921223 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb1f54f_db1f_4ef2_9f90_40cbedef72db.slice/crio-88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a WatchSource:0}: Error finding container 88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a: Status 404 returned error can't find the container with id 88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.609802 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.843327 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4320ea6-3657-454b-b535-3776f405d823","Type":"ContainerStarted","Data":"fda5570d8c0d071aa41f949596f382336fd9fc44c01cf7c106c42a65a6a0a6e1"} Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.843446 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d4320ea6-3657-454b-b535-3776f405d823","Type":"ContainerStarted","Data":"eec770148fa38dfad0a833f0a778517c0e8a1869a60594d2872ec739f041079d"} Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.846606 5031 generic.go:334] "Generic (PLEG): container finished" podID="0bb1f54f-db1f-4ef2-9f90-40cbedef72db" containerID="2f2921e9bec8a8b20e3ab6537a183797836d4b2e42bea68d53e29cf56dd4248c" exitCode=0 Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.846714 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" 
event={"ID":"0bb1f54f-db1f-4ef2-9f90-40cbedef72db","Type":"ContainerDied","Data":"2f2921e9bec8a8b20e3ab6537a183797836d4b2e42bea68d53e29cf56dd4248c"} Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.846781 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" event={"ID":"0bb1f54f-db1f-4ef2-9f90-40cbedef72db","Type":"ContainerStarted","Data":"88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a"} Jan 29 09:30:01 crc kubenswrapper[5031]: I0129 09:30:01.893994 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.893972715 podStartE2EDuration="2.893972715s" podCreationTimestamp="2026-01-29 09:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:30:01.862265974 +0000 UTC m=+3082.361853946" watchObservedRunningTime="2026-01-29 09:30:01.893972715 +0000 UTC m=+3082.393560667" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.223460 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.341331 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume\") pod \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.341540 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume\") pod \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.341624 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcw7q\" (UniqueName: \"kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q\") pod \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\" (UID: \"0bb1f54f-db1f-4ef2-9f90-40cbedef72db\") " Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.343319 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bb1f54f-db1f-4ef2-9f90-40cbedef72db" (UID: "0bb1f54f-db1f-4ef2-9f90-40cbedef72db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.349747 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bb1f54f-db1f-4ef2-9f90-40cbedef72db" (UID: "0bb1f54f-db1f-4ef2-9f90-40cbedef72db"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.354968 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q" (OuterVolumeSpecName: "kube-api-access-vcw7q") pod "0bb1f54f-db1f-4ef2-9f90-40cbedef72db" (UID: "0bb1f54f-db1f-4ef2-9f90-40cbedef72db"). InnerVolumeSpecName "kube-api-access-vcw7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.444277 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.444496 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.444506 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcw7q\" (UniqueName: \"kubernetes.io/projected/0bb1f54f-db1f-4ef2-9f90-40cbedef72db-kube-api-access-vcw7q\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.539894 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.581000 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.867678 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" event={"ID":"0bb1f54f-db1f-4ef2-9f90-40cbedef72db","Type":"ContainerDied","Data":"88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a"} Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.867754 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88756707eb65e885c4e1ae4345e07b258db6de29adad0b5dfd9c2691390a7e2a" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.867693 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-ttqg8" Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.867846 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="manila-share" containerID="cri-o://2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" gracePeriod=30 Jan 29 09:30:03 crc kubenswrapper[5031]: I0129 09:30:03.867959 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="probe" containerID="cri-o://de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" gracePeriod=30 Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.298789 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb"] Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.307047 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-kzbwb"] Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.752740 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.870870 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.870921 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871045 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n4c8\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871068 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871173 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871226 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871251 5031 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871298 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data\") pod \"3da33a2c-cc44-487d-9679-d586a82652b8\" (UID: \"3da33a2c-cc44-487d-9679-d586a82652b8\") " Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.871845 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.872125 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.876286 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8" (OuterVolumeSpecName: "kube-api-access-9n4c8") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "kube-api-access-9n4c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.876963 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.879604 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts" (OuterVolumeSpecName: "scripts") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.879830 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph" (OuterVolumeSpecName: "ceph") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.884580 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.884613 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerDied","Data":"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e"} Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.884688 5031 scope.go:117] "RemoveContainer" containerID="de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.884439 5031 generic.go:334] "Generic (PLEG): container finished" podID="3da33a2c-cc44-487d-9679-d586a82652b8" containerID="de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" exitCode=0 Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.885731 5031 generic.go:334] "Generic (PLEG): container finished" podID="3da33a2c-cc44-487d-9679-d586a82652b8" containerID="2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" exitCode=1 Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.885749 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerDied","Data":"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9"} Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.885794 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3da33a2c-cc44-487d-9679-d586a82652b8","Type":"ContainerDied","Data":"5f259c745b4dc593f4899ee1ccf1f38ae13f99e581326810e283f09cc86fd630"} Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.931334 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.944221 5031 scope.go:117] "RemoveContainer" containerID="2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.963551 5031 scope.go:117] "RemoveContainer" containerID="de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" Jan 29 09:30:04 crc kubenswrapper[5031]: E0129 09:30:04.964043 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e\": container with ID starting with de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e not found: ID does not exist" containerID="de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.964131 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e"} err="failed to get container status \"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e\": rpc error: code = NotFound desc = could not find container \"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e\": container with ID starting with de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e not found: ID does not exist" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.964176 5031 scope.go:117] "RemoveContainer" containerID="2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" Jan 29 09:30:04 crc kubenswrapper[5031]: E0129 09:30:04.964975 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9\": container with ID starting with 2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9 not found: ID does not exist" containerID="2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.965124 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9"} err="failed to get container status \"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9\": rpc error: code = NotFound desc = could not find container \"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9\": container with ID starting with 2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9 not found: ID does not exist" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.965213 5031 scope.go:117] "RemoveContainer" containerID="de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.965668 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e"} err="failed to get container status \"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e\": rpc error: code = NotFound desc = could not find container \"de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e\": container with ID starting with de520f247f7c73dd8aff1d163dac8e9fb6858d5a4e7a7e02088ba9f147b4061e not found: ID does not exist" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.965703 5031 
scope.go:117] "RemoveContainer" containerID="2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.965938 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9"} err="failed to get container status \"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9\": rpc error: code = NotFound desc = could not find container \"2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9\": container with ID starting with 2fa721662a63f5284b964b3e7980d628f35f724da0fafb99fa1f2215afc584a9 not found: ID does not exist" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974486 5031 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974527 5031 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974542 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974553 5031 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974562 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974573 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n4c8\" (UniqueName: \"kubernetes.io/projected/3da33a2c-cc44-487d-9679-d586a82652b8-kube-api-access-9n4c8\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.974583 5031 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da33a2c-cc44-487d-9679-d586a82652b8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[5031]: I0129 09:30:04.991519 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data" (OuterVolumeSpecName: "config-data") pod "3da33a2c-cc44-487d-9679-d586a82652b8" (UID: "3da33a2c-cc44-487d-9679-d586a82652b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.078113 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da33a2c-cc44-487d-9679-d586a82652b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.264725 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.274542 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.313878 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:05 crc kubenswrapper[5031]: E0129 09:30:05.316473 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="probe" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.316512 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="probe" Jan 29 09:30:05 crc kubenswrapper[5031]: E0129 09:30:05.316554 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb1f54f-db1f-4ef2-9f90-40cbedef72db" containerName="collect-profiles" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.316567 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb1f54f-db1f-4ef2-9f90-40cbedef72db" containerName="collect-profiles" Jan 29 09:30:05 crc kubenswrapper[5031]: E0129 09:30:05.316642 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="manila-share" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.316653 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="manila-share" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.317335 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb1f54f-db1f-4ef2-9f90-40cbedef72db" containerName="collect-profiles" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.317376 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="manila-share" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.317407 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" containerName="probe" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.320331 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.336714 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.354436 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383467 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383557 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383584 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvchh\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-kube-api-access-qvchh\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383830 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-ceph\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383854 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383930 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383947 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.383988 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-scripts\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc 
kubenswrapper[5031]: I0129 09:30:05.485989 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-ceph\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486045 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486103 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486125 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486160 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-scripts\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486258 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486328 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486379 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvchh\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-kube-api-access-qvchh\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486666 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.486744 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1ae94363-9689-48ed-8c8d-c1668fb5955a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.490569 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-scripts\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.490800 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.492997 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-config-data\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.494602 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae94363-9689-48ed-8c8d-c1668fb5955a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.499776 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-ceph\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.504906 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvchh\" (UniqueName: \"kubernetes.io/projected/1ae94363-9689-48ed-8c8d-c1668fb5955a-kube-api-access-qvchh\") pod \"manila-share-share1-0\" (UID: \"1ae94363-9689-48ed-8c8d-c1668fb5955a\") " pod="openstack/manila-share-share1-0" Jan 29 09:30:05 crc kubenswrapper[5031]: I0129 09:30:05.662548 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 09:30:06 crc kubenswrapper[5031]: I0129 09:30:06.188183 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 09:30:06 crc kubenswrapper[5031]: W0129 09:30:06.189301 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ae94363_9689_48ed_8c8d_c1668fb5955a.slice/crio-f0f6b10dc0fc78b6c3d46ca9e5678ebfd1ba6adf854a3cc92f9aca40f3baad81 WatchSource:0}: Error finding container f0f6b10dc0fc78b6c3d46ca9e5678ebfd1ba6adf854a3cc92f9aca40f3baad81: Status 404 returned error can't find the container with id f0f6b10dc0fc78b6c3d46ca9e5678ebfd1ba6adf854a3cc92f9aca40f3baad81 Jan 29 09:30:06 crc kubenswrapper[5031]: I0129 09:30:06.299948 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0a310f-43ae-4cfe-abfd-d90b36b691ec" path="/var/lib/kubelet/pods/0c0a310f-43ae-4cfe-abfd-d90b36b691ec/volumes" Jan 29 09:30:06 crc kubenswrapper[5031]: I0129 09:30:06.301079 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da33a2c-cc44-487d-9679-d586a82652b8" path="/var/lib/kubelet/pods/3da33a2c-cc44-487d-9679-d586a82652b8/volumes" Jan 29 09:30:06 crc kubenswrapper[5031]: I0129 09:30:06.924915 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1ae94363-9689-48ed-8c8d-c1668fb5955a","Type":"ContainerStarted","Data":"481a4709789fb299284216bd2a413d191671661ff79cd2f5ec75b223f7dc0e80"} Jan 29 09:30:06 crc kubenswrapper[5031]: I0129 09:30:06.924977 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1ae94363-9689-48ed-8c8d-c1668fb5955a","Type":"ContainerStarted","Data":"f0f6b10dc0fc78b6c3d46ca9e5678ebfd1ba6adf854a3cc92f9aca40f3baad81"} Jan 29 09:30:07 crc kubenswrapper[5031]: I0129 09:30:07.936554 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1ae94363-9689-48ed-8c8d-c1668fb5955a","Type":"ContainerStarted","Data":"d2d7b9261dbf8a49caa8ebcddef0d57f49e9a4e1df60f4d6d9ad23ba8e9aefc0"} Jan 29 09:30:07 crc kubenswrapper[5031]: I0129 09:30:07.976129 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.976099482 podStartE2EDuration="2.976099482s" podCreationTimestamp="2026-01-29 09:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:30:07.959568803 +0000 UTC m=+3088.459156765" watchObservedRunningTime="2026-01-29 09:30:07.976099482 +0000 UTC m=+3088.475687434" Jan 29 09:30:09 crc kubenswrapper[5031]: I0129 09:30:09.854064 5031 scope.go:117] "RemoveContainer" containerID="4da9e29c632601898d8ee1ba070040d7cb54dcdc6c5a971500f9c890942ac9ef" Jan 29 09:30:10 crc kubenswrapper[5031]: I0129 09:30:10.249964 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 29 09:30:11 crc kubenswrapper[5031]: I0129 09:30:11.609650 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5df6bb9c74-nlm69" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Jan 29 09:30:11 crc kubenswrapper[5031]: I0129 
09:30:11.610001 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:30:12 crc kubenswrapper[5031]: I0129 09:30:12.283699 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:30:12 crc kubenswrapper[5031]: E0129 09:30:12.284654 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:30:14 crc kubenswrapper[5031]: I0129 09:30:14.447678 5031 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:30:15 crc kubenswrapper[5031]: I0129 09:30:15.662829 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.034332 5031 generic.go:334] "Generic (PLEG): container finished" podID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerID="2e32fa1359c13d0969696db27ec62a31f3c0af1897840cb4b6d5af323815d8a4" exitCode=137 Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.034414 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerDied","Data":"2e32fa1359c13d0969696db27ec62a31f3c0af1897840cb4b6d5af323815d8a4"} Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.034640 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5df6bb9c74-nlm69" event={"ID":"a88f18bd-1a15-4a57-8ee9-4457fbd15905","Type":"ContainerDied","Data":"946ffc8ecac4e18d4794fd6107bffa15125486b231b219ab22a081d2ba3baffe"} Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.034658 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946ffc8ecac4e18d4794fd6107bffa15125486b231b219ab22a081d2ba3baffe" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.064980 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221003 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221545 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221602 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6h2p\" (UniqueName: \"kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221681 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221714 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221788 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.221875 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data\") pod \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\" (UID: \"a88f18bd-1a15-4a57-8ee9-4457fbd15905\") " Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.222881 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs" (OuterVolumeSpecName: "logs") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.228978 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p" (OuterVolumeSpecName: "kube-api-access-b6h2p") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "kube-api-access-b6h2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.230810 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.256039 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data" (OuterVolumeSpecName: "config-data") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.257739 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.259331 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts" (OuterVolumeSpecName: "scripts") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.291273 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a88f18bd-1a15-4a57-8ee9-4457fbd15905" (UID: "a88f18bd-1a15-4a57-8ee9-4457fbd15905"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.324972 5031 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325214 5031 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325271 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325323 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a88f18bd-1a15-4a57-8ee9-4457fbd15905-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325395 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88f18bd-1a15-4a57-8ee9-4457fbd15905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325451 5031 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88f18bd-1a15-4a57-8ee9-4457fbd15905-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:17 crc kubenswrapper[5031]: I0129 09:30:17.325503 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6h2p\" (UniqueName: \"kubernetes.io/projected/a88f18bd-1a15-4a57-8ee9-4457fbd15905-kube-api-access-b6h2p\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:18 crc kubenswrapper[5031]: I0129 09:30:18.042109 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5df6bb9c74-nlm69" Jan 29 09:30:18 crc kubenswrapper[5031]: I0129 09:30:18.084758 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"] Jan 29 09:30:18 crc kubenswrapper[5031]: I0129 09:30:18.097714 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5df6bb9c74-nlm69"] Jan 29 09:30:18 crc kubenswrapper[5031]: I0129 09:30:18.300821 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" path="/var/lib/kubelet/pods/a88f18bd-1a15-4a57-8ee9-4457fbd15905/volumes" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.072436 5031 generic.go:334] "Generic (PLEG): container finished" podID="27640798-ecc3-441f-abec-6ff47185919c" containerID="b7647b048e7eefd53be1d5d26d7f9d82df5bc6d98f529131ff8991fd2dc65d4d" exitCode=137 Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.072557 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerDied","Data":"b7647b048e7eefd53be1d5d26d7f9d82df5bc6d98f529131ff8991fd2dc65d4d"} Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.073007 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27640798-ecc3-441f-abec-6ff47185919c","Type":"ContainerDied","Data":"2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6"} Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.073027 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2813047e442e3e247e10dbddb6f25a0b11b881fee42cf351ed4d06d4e28f97a6" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.148885 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329002 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329546 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329591 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329635 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329683 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzk64\" (UniqueName: \"kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329722 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329807 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329861 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml\") pod \"27640798-ecc3-441f-abec-6ff47185919c\" (UID: \"27640798-ecc3-441f-abec-6ff47185919c\") " Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.329849 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.330512 5031 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.331406 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.334986 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64" (OuterVolumeSpecName: "kube-api-access-mzk64") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "kube-api-access-mzk64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.335287 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts" (OuterVolumeSpecName: "scripts") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.364809 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.388256 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.425619 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.436771 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data" (OuterVolumeSpecName: "config-data") pod "27640798-ecc3-441f-abec-6ff47185919c" (UID: "27640798-ecc3-441f-abec-6ff47185919c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437510 5031 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27640798-ecc3-441f-abec-6ff47185919c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437532 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437541 5031 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437552 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzk64\" (UniqueName: \"kubernetes.io/projected/27640798-ecc3-441f-abec-6ff47185919c-kube-api-access-mzk64\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437559 5031 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437567 5031 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.437575 5031 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27640798-ecc3-441f-abec-6ff47185919c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:21 crc kubenswrapper[5031]: I0129 09:30:21.911978 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.081217 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.114318 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.123551 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144326 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144716 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-notification-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144735 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-notification-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144749 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="proxy-httpd" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144758 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="proxy-httpd" Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144770 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-central-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144776 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-central-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144784 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon-log" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144789 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon-log" Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144814 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="sg-core" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144820 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="sg-core" Jan 29 09:30:22 crc kubenswrapper[5031]: E0129 09:30:22.144829 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144835 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.144998 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="sg-core" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.145016 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon-log" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.145024 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88f18bd-1a15-4a57-8ee9-4457fbd15905" containerName="horizon" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.145034 5031 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-notification-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.145041 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="ceilometer-central-agent" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.145051 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="27640798-ecc3-441f-abec-6ff47185919c" containerName="proxy-httpd" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.147125 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.149722 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.150089 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.150268 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168206 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168252 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-scripts\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168295 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-log-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168342 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168361 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-run-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168444 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-config-data\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168465 5031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.168518 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/f8949618-20d4-4cd9-8b4b-6abcf3684676-kube-api-access-2pgm6\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.169958 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.270802 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-config-data\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.271127 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.271776 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/f8949618-20d4-4cd9-8b4b-6abcf3684676-kube-api-access-2pgm6\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.271876 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.271915 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-scripts\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.272009 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-log-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.272113 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.272186 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-run-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.272811 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-log-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.272907 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8949618-20d4-4cd9-8b4b-6abcf3684676-run-httpd\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.275143 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.275886 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-config-data\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.277718 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.278208 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-scripts\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.279569 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8949618-20d4-4cd9-8b4b-6abcf3684676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.293403 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27640798-ecc3-441f-abec-6ff47185919c" path="/var/lib/kubelet/pods/27640798-ecc3-441f-abec-6ff47185919c/volumes" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.294912 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/f8949618-20d4-4cd9-8b4b-6abcf3684676-kube-api-access-2pgm6\") pod \"ceilometer-0\" (UID: \"f8949618-20d4-4cd9-8b4b-6abcf3684676\") " pod="openstack/ceilometer-0" Jan 29 09:30:22 crc kubenswrapper[5031]: I0129 09:30:22.467488 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:30:23 crc kubenswrapper[5031]: W0129 09:30:23.061623 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8949618_20d4_4cd9_8b4b_6abcf3684676.slice/crio-08a6fa68fb62beb37ddf7e16352b95cd9ab8f416096ab2b30407ffdf66e2677b WatchSource:0}: Error finding container 08a6fa68fb62beb37ddf7e16352b95cd9ab8f416096ab2b30407ffdf66e2677b: Status 404 returned error can't find the container with id 08a6fa68fb62beb37ddf7e16352b95cd9ab8f416096ab2b30407ffdf66e2677b Jan 29 09:30:23 crc kubenswrapper[5031]: I0129 09:30:23.064057 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:30:23 crc kubenswrapper[5031]: I0129 09:30:23.090266 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8949618-20d4-4cd9-8b4b-6abcf3684676","Type":"ContainerStarted","Data":"08a6fa68fb62beb37ddf7e16352b95cd9ab8f416096ab2b30407ffdf66e2677b"} Jan 29 09:30:24 crc kubenswrapper[5031]: I0129 09:30:24.101106 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8949618-20d4-4cd9-8b4b-6abcf3684676","Type":"ContainerStarted","Data":"dc01821204d6b5f4674a624da8216d30e53514155d7e9bd454fa5df1b9b62e24"} Jan 29 09:30:24 crc kubenswrapper[5031]: I0129 09:30:24.282481 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:30:24 crc kubenswrapper[5031]: E0129 09:30:24.282885 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:30:25 crc kubenswrapper[5031]: I0129 09:30:25.117027 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8949618-20d4-4cd9-8b4b-6abcf3684676","Type":"ContainerStarted","Data":"6a1286407fc21d5fd87bfa7aa56142de1f722f831219d342f408f87670059d64"} Jan 29 09:30:26 crc kubenswrapper[5031]: I0129 09:30:26.131960 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8949618-20d4-4cd9-8b4b-6abcf3684676","Type":"ContainerStarted","Data":"7b3993fd4ca2eed866d154ff6abbd7081f2e20d8d53a32d300ef8b1d532aa854"} Jan 29 09:30:27 crc kubenswrapper[5031]: I0129 09:30:27.448386 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 29 09:30:28 crc kubenswrapper[5031]: I0129 09:30:28.152320 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8949618-20d4-4cd9-8b4b-6abcf3684676","Type":"ContainerStarted","Data":"c42eb6b9ffccb9fdc3e1d0e23328e694b641d4c2a7f8521437912211b22eb67a"} Jan 29 09:30:28 crc kubenswrapper[5031]: I0129 09:30:28.152854 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:30:28 crc kubenswrapper[5031]: I0129 09:30:28.208845 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.374325878 podStartE2EDuration="6.208805905s" podCreationTimestamp="2026-01-29 09:30:22 +0000 UTC" 
firstStartedPulling="2026-01-29 09:30:23.063942108 +0000 UTC m=+3103.563530060" lastFinishedPulling="2026-01-29 09:30:26.898422135 +0000 UTC m=+3107.398010087" observedRunningTime="2026-01-29 09:30:28.197216567 +0000 UTC m=+3108.696804519" watchObservedRunningTime="2026-01-29 09:30:28.208805905 +0000 UTC m=+3108.708393857" Jan 29 09:30:37 crc kubenswrapper[5031]: I0129 09:30:37.283084 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:30:37 crc kubenswrapper[5031]: E0129 09:30:37.284341 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:30:49 crc kubenswrapper[5031]: I0129 09:30:49.284238 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:30:49 crc kubenswrapper[5031]: E0129 09:30:49.285523 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:30:52 crc kubenswrapper[5031]: I0129 09:30:52.474304 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:31:04 crc kubenswrapper[5031]: I0129 09:31:04.283033 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:31:04 crc kubenswrapper[5031]: E0129 09:31:04.283994 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:31:16 crc kubenswrapper[5031]: I0129 09:31:16.283026 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:31:16 crc kubenswrapper[5031]: E0129 09:31:16.285173 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:31:28 crc kubenswrapper[5031]: I0129 09:31:28.282151 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:31:28 crc kubenswrapper[5031]: E0129 09:31:28.282830 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:31:42 crc kubenswrapper[5031]: I0129 09:31:42.282638 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:31:42 crc kubenswrapper[5031]: E0129 09:31:42.283540 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.195254 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.196479 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.200287 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.200553 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bhhzs" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.202116 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.202350 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.228398 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.259821 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.260030 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.260166 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362324 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362433 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362505 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362540 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362573 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362610 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkwd\" (UniqueName: \"kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362637 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362660 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.362685 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.363663 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.364023 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.382321 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464078 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464242 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464295 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464347 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464397 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkwd\" (UniqueName: \"kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.464431 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.465070 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.465328 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.465864 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.470692 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.472184 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.484954 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkwd\" (UniqueName: \"kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.505712 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " pod="openstack/tempest-tests-tempest" Jan 29 09:31:43 crc kubenswrapper[5031]: I0129 09:31:43.531817 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:31:44 crc kubenswrapper[5031]: I0129 09:31:43.998878 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:31:44 crc kubenswrapper[5031]: W0129 09:31:44.004666 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aaab885_ceb7_4fa0_bfe5_87da9d8bb76e.slice/crio-513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6 WatchSource:0}: Error finding container 513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6: Status 404 returned error can't find the container with id 513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6 Jan 29 09:31:44 crc kubenswrapper[5031]: I0129 09:31:44.006983 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:31:44 crc kubenswrapper[5031]: I0129 09:31:44.895499 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e","Type":"ContainerStarted","Data":"513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6"} Jan 29 09:31:55 crc kubenswrapper[5031]: I0129 09:31:55.283837 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:31:55 crc kubenswrapper[5031]: E0129 09:31:55.284692 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:32:09 crc kubenswrapper[5031]: I0129 09:32:09.283253 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:32:09 crc kubenswrapper[5031]: E0129 09:32:09.284010 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:32:11 crc kubenswrapper[5031]: E0129 09:32:11.263689 5031 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 29 09:32:11 crc kubenswrapper[5031]: E0129 09:32:11.264207 5031 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvkwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:32:11 crc kubenswrapper[5031]: E0129 09:32:11.265493 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" Jan 29 09:32:12 crc kubenswrapper[5031]: E0129 09:32:12.188603 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" Jan 29 09:32:20 crc kubenswrapper[5031]: I0129 09:32:20.291013 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:32:20 crc kubenswrapper[5031]: E0129 09:32:20.292073 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:32:30 crc kubenswrapper[5031]: I0129 09:32:30.386439 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e","Type":"ContainerStarted","Data":"9011dd310a1a4cfb9a68e0a8e18b2e961a298b12318f78acbc24aa288d17c709"} Jan 29 09:32:30 crc kubenswrapper[5031]: I0129 09:32:30.434879 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.67833588 podStartE2EDuration="48.434856283s" podCreationTimestamp="2026-01-29 09:31:42 +0000 UTC" firstStartedPulling="2026-01-29 09:31:44.006761795 +0000 UTC m=+3184.506349747" lastFinishedPulling="2026-01-29 09:32:28.763282198 +0000 UTC m=+3229.262870150" observedRunningTime="2026-01-29 09:32:30.422822613 +0000 UTC m=+3230.922410605" watchObservedRunningTime="2026-01-29 09:32:30.434856283 +0000 UTC m=+3230.934444245" Jan 29 09:32:31 crc kubenswrapper[5031]: I0129 09:32:31.282399 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:32:31 crc kubenswrapper[5031]: E0129 09:32:31.283128 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:32:42 crc kubenswrapper[5031]: I0129 09:32:42.283381 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:32:42 crc kubenswrapper[5031]: I0129 09:32:42.504286 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36"} Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.716709 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.721688 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.735999 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.803212 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.803432 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.803461 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dmz\" (UniqueName: \"kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.905213 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.905355 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.905395 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dmz\" (UniqueName: \"kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.905842 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.905948 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:39 crc kubenswrapper[5031]: I0129 09:33:39.934279 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j5dmz\" (UniqueName: \"kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz\") pod \"redhat-operators-pxm8j\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:40 crc kubenswrapper[5031]: I0129 09:33:40.042513 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:40 crc kubenswrapper[5031]: I0129 09:33:40.540425 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:33:41 crc kubenswrapper[5031]: I0129 09:33:41.047163 5031 generic.go:334] "Generic (PLEG): container finished" podID="4211a75e-1b12-403b-8337-871edbda8eef" containerID="f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7" exitCode=0 Jan 29 09:33:41 crc kubenswrapper[5031]: I0129 09:33:41.047215 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerDied","Data":"f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7"} Jan 29 09:33:41 crc kubenswrapper[5031]: I0129 09:33:41.047250 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerStarted","Data":"245f2d971d135f68a9bea2463550fc01b01f3bf9163c7527c55ec2def60eabfb"} Jan 29 09:33:42 crc kubenswrapper[5031]: I0129 09:33:42.059008 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerStarted","Data":"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674"} Jan 29 09:33:45 crc kubenswrapper[5031]: I0129 09:33:45.087741 5031 generic.go:334] "Generic (PLEG): container finished" podID="4211a75e-1b12-403b-8337-871edbda8eef" containerID="70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674" exitCode=0 Jan 29 09:33:45 crc kubenswrapper[5031]: I0129 09:33:45.087841 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerDied","Data":"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674"} Jan 29 09:33:46 crc kubenswrapper[5031]: I0129 09:33:46.122830 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerStarted","Data":"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b"} Jan 29 09:33:46 crc kubenswrapper[5031]: I0129 09:33:46.154803 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pxm8j" podStartSLOduration=2.714744691 podStartE2EDuration="7.154778637s" podCreationTimestamp="2026-01-29 09:33:39 +0000 UTC" firstStartedPulling="2026-01-29 09:33:41.049911909 +0000 UTC m=+3301.549499861" lastFinishedPulling="2026-01-29 09:33:45.489945855 +0000 UTC m=+3305.989533807" observedRunningTime="2026-01-29 09:33:46.144480342 +0000 UTC m=+3306.644068314" watchObservedRunningTime="2026-01-29 09:33:46.154778637 +0000 UTC m=+3306.654366599" Jan 29 09:33:50 crc kubenswrapper[5031]: I0129 09:33:50.043497 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pxm8j" 
Jan 29 09:33:50 crc kubenswrapper[5031]: I0129 09:33:50.044015 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:33:51 crc kubenswrapper[5031]: I0129 09:33:51.148246 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pxm8j" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="registry-server" probeResult="failure" output=< Jan 29 09:33:51 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 09:33:51 crc kubenswrapper[5031]: > Jan 29 09:34:00 crc kubenswrapper[5031]: I0129 09:34:00.100289 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:34:00 crc kubenswrapper[5031]: I0129 09:34:00.151989 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:34:00 crc kubenswrapper[5031]: I0129 09:34:00.342863 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.264303 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pxm8j" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="registry-server" containerID="cri-o://ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b" gracePeriod=2 Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.876309 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.974667 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content\") pod \"4211a75e-1b12-403b-8337-871edbda8eef\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.974823 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dmz\" (UniqueName: \"kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz\") pod \"4211a75e-1b12-403b-8337-871edbda8eef\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.974915 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities\") pod \"4211a75e-1b12-403b-8337-871edbda8eef\" (UID: \"4211a75e-1b12-403b-8337-871edbda8eef\") " Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.975496 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities" (OuterVolumeSpecName: "utilities") pod "4211a75e-1b12-403b-8337-871edbda8eef" (UID: "4211a75e-1b12-403b-8337-871edbda8eef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.975702 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:01 crc kubenswrapper[5031]: I0129 09:34:01.980965 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz" (OuterVolumeSpecName: "kube-api-access-j5dmz") pod "4211a75e-1b12-403b-8337-871edbda8eef" (UID: "4211a75e-1b12-403b-8337-871edbda8eef"). InnerVolumeSpecName "kube-api-access-j5dmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.082647 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dmz\" (UniqueName: \"kubernetes.io/projected/4211a75e-1b12-403b-8337-871edbda8eef-kube-api-access-j5dmz\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.113053 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4211a75e-1b12-403b-8337-871edbda8eef" (UID: "4211a75e-1b12-403b-8337-871edbda8eef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.184895 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211a75e-1b12-403b-8337-871edbda8eef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.273725 5031 generic.go:334] "Generic (PLEG): container finished" podID="4211a75e-1b12-403b-8337-871edbda8eef" containerID="ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b" exitCode=0 Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.273933 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerDied","Data":"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b"} Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.274064 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxm8j" event={"ID":"4211a75e-1b12-403b-8337-871edbda8eef","Type":"ContainerDied","Data":"245f2d971d135f68a9bea2463550fc01b01f3bf9163c7527c55ec2def60eabfb"} Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.273999 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pxm8j" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.274082 5031 scope.go:117] "RemoveContainer" containerID="ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.313627 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.314125 5031 scope.go:117] "RemoveContainer" containerID="70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.321849 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pxm8j"] Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.335196 5031 scope.go:117] "RemoveContainer" containerID="f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.373844 5031 scope.go:117] "RemoveContainer" containerID="ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b" Jan 29 09:34:02 crc kubenswrapper[5031]: E0129 09:34:02.374322 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b\": container with ID starting with ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b not found: ID does not exist" containerID="ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.374352 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b"} err="failed to get container status \"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b\": rpc error: code = NotFound desc = could not find container \"ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b\": container with ID starting with ad0ccd1beccd5d8c14b5cef983682687fa622663a0657d67f057a3e63760527b not found: ID does not exist" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.374399 5031 scope.go:117] "RemoveContainer" containerID="70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674" Jan 29 09:34:02 crc kubenswrapper[5031]: E0129 09:34:02.374834 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674\": container with ID starting with 70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674 not found: ID does not exist" containerID="70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.374894 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674"} err="failed to get container status \"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674\": rpc error: code = NotFound desc = could not find container \"70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674\": container with ID starting with 70869def7c96ed61c2c7501719233448053295c0ce26a86237430542b0515674 not found: ID does not exist" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.374916 5031 scope.go:117] "RemoveContainer" 
containerID="f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7" Jan 29 09:34:02 crc kubenswrapper[5031]: E0129 09:34:02.375280 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7\": container with ID starting with f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7 not found: ID does not exist" containerID="f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7" Jan 29 09:34:02 crc kubenswrapper[5031]: I0129 09:34:02.375303 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7"} err="failed to get container status \"f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7\": rpc error: code = NotFound desc = could not find container \"f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7\": container with ID starting with f7a8422becaacf81fe6620b907a768fdecdfdb600848cce730fa10f5fbe8fbf7 not found: ID does not exist" Jan 29 09:34:04 crc kubenswrapper[5031]: I0129 09:34:04.301461 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4211a75e-1b12-403b-8337-871edbda8eef" path="/var/lib/kubelet/pods/4211a75e-1b12-403b-8337-871edbda8eef/volumes" Jan 29 09:34:41 crc kubenswrapper[5031]: I0129 09:34:41.661414 5031 generic.go:334] "Generic (PLEG): container finished" podID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" containerID="9011dd310a1a4cfb9a68e0a8e18b2e961a298b12318f78acbc24aa288d17c709" exitCode=0 Jan 29 09:34:41 crc kubenswrapper[5031]: I0129 09:34:41.661524 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e","Type":"ContainerDied","Data":"9011dd310a1a4cfb9a68e0a8e18b2e961a298b12318f78acbc24aa288d17c709"} Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.111570 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.297216 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.297291 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.297475 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.297533 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.297559 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvkwd\" (UniqueName: \"kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.298137 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.299281 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.299326 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.299448 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key\") pod \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\" (UID: \"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e\") " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.298077 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.300699 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data" (OuterVolumeSpecName: "config-data") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.303948 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd" (OuterVolumeSpecName: "kube-api-access-vvkwd") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "kube-api-access-vvkwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.306688 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.307040 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.343886 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.357821 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.359458 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.370207 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" (UID: "9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402292 5031 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402335 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402390 5031 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402402 5031 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402413 5031 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402424 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvkwd\" (UniqueName: \"kubernetes.io/projected/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-kube-api-access-vvkwd\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402433 5031 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402443 5031 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.402452 5031 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.428125 5031 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.505139 5031 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.694102 5031 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e","Type":"ContainerDied","Data":"513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6"} Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.694151 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="513137d849975dfa675886ecb05039893d724634940e736b9bb2eeb4863e48d6" Jan 29 09:34:43 crc kubenswrapper[5031]: I0129 09:34:43.694227 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.012540 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:34:48 crc kubenswrapper[5031]: E0129 09:34:48.013448 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="extract-content" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013464 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="extract-content" Jan 29 09:34:48 crc kubenswrapper[5031]: E0129 09:34:48.013480 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013486 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:34:48 crc kubenswrapper[5031]: E0129 09:34:48.013499 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="registry-server" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013505 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="registry-server" Jan 29 09:34:48 crc kubenswrapper[5031]: E0129 09:34:48.013520 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="extract-utilities" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013526 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="extract-utilities" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013708 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4211a75e-1b12-403b-8337-871edbda8eef" containerName="registry-server" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.013734 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.014432 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.018209 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bhhzs" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.025328 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.206730 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.206916 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5w2\" (UniqueName: \"kubernetes.io/projected/a5239287-c272-4c5b-b72b-c6fd55567ae8-kube-api-access-fs5w2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.309152 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.309539 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs5w2\" (UniqueName: \"kubernetes.io/projected/a5239287-c272-4c5b-b72b-c6fd55567ae8-kube-api-access-fs5w2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.310258 5031 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.334210 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs5w2\" (UniqueName: \"kubernetes.io/projected/a5239287-c272-4c5b-b72b-c6fd55567ae8-kube-api-access-fs5w2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.341996 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a5239287-c272-4c5b-b72b-c6fd55567ae8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc 
kubenswrapper[5031]: I0129 09:34:48.369613 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:34:48 crc kubenswrapper[5031]: I0129 09:34:48.831047 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:34:48 crc kubenswrapper[5031]: W0129 09:34:48.833276 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5239287_c272_4c5b_b72b_c6fd55567ae8.slice/crio-fff289cb08fabe80043d8c0e71132f13a1abd1927f6e0b110691ef999532e405 WatchSource:0}: Error finding container fff289cb08fabe80043d8c0e71132f13a1abd1927f6e0b110691ef999532e405: Status 404 returned error can't find the container with id fff289cb08fabe80043d8c0e71132f13a1abd1927f6e0b110691ef999532e405 Jan 29 09:34:49 crc kubenswrapper[5031]: I0129 09:34:49.774247 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a5239287-c272-4c5b-b72b-c6fd55567ae8","Type":"ContainerStarted","Data":"fff289cb08fabe80043d8c0e71132f13a1abd1927f6e0b110691ef999532e405"} Jan 29 09:34:50 crc kubenswrapper[5031]: I0129 09:34:50.783486 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a5239287-c272-4c5b-b72b-c6fd55567ae8","Type":"ContainerStarted","Data":"a7b847bdb33c37dd4ca19a16c96101be37b523b3b24f203d577d4466d20255fc"} Jan 29 09:34:50 crc kubenswrapper[5031]: I0129 09:34:50.804150 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.567856256 podStartE2EDuration="3.804128143s" podCreationTimestamp="2026-01-29 09:34:47 +0000 UTC" firstStartedPulling="2026-01-29 09:34:48.835257062 +0000 UTC m=+3369.334845014" lastFinishedPulling="2026-01-29 09:34:50.071528949 +0000 UTC m=+3370.571116901" observedRunningTime="2026-01-29 09:34:50.795257296 +0000 UTC m=+3371.294845258" watchObservedRunningTime="2026-01-29 09:34:50.804128143 +0000 UTC m=+3371.303716095" Jan 29 09:35:08 crc kubenswrapper[5031]: I0129 09:35:08.493590 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:35:08 crc kubenswrapper[5031]: I0129 09:35:08.495189 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:35:10 crc kubenswrapper[5031]: I0129 09:35:10.219133 5031 scope.go:117] "RemoveContainer" containerID="a6b1402009ec38a38a6efa324f59d9049bc0cf20abba4fb742c0d4ffe7aeee18" Jan 29 09:35:10 crc kubenswrapper[5031]: I0129 09:35:10.245029 5031 scope.go:117] "RemoveContainer" containerID="6793370f884f3e40faa40bc10c3efd66ef06d1804df5aef26709616fed73e3bf" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.652773 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v86rb/must-gather-mmfsw"] Jan 29 09:35:13 crc 
kubenswrapper[5031]: I0129 09:35:13.654758 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.659160 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v86rb"/"default-dockercfg-2bkgc" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.659256 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v86rb"/"openshift-service-ca.crt" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.659160 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v86rb"/"kube-root-ca.crt" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.695328 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v86rb/must-gather-mmfsw"] Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.761810 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25282\" (UniqueName: \"kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.761924 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.863628 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25282\" (UniqueName: \"kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.863722 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.864527 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.895096 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25282\" (UniqueName: \"kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282\") pod \"must-gather-mmfsw\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:13 crc kubenswrapper[5031]: I0129 09:35:13.982174 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:35:14 crc kubenswrapper[5031]: I0129 09:35:14.445939 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v86rb/must-gather-mmfsw"] Jan 29 09:35:15 crc kubenswrapper[5031]: I0129 09:35:15.005529 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/must-gather-mmfsw" event={"ID":"586d3fab-ba5c-42ed-8ff8-4052ec209fa9","Type":"ContainerStarted","Data":"6cebbc41ca52da51dde9b8b72872351ca23b0be51c7e6714755fdbf513744e6e"} Jan 29 09:35:22 crc kubenswrapper[5031]: I0129 09:35:22.084169 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/must-gather-mmfsw" event={"ID":"586d3fab-ba5c-42ed-8ff8-4052ec209fa9","Type":"ContainerStarted","Data":"60eaa7b692c9ea49b1a4fb35dc21c0f3de7cb4ace9f7971095a9aa22cddae5af"} Jan 29 09:35:22 crc kubenswrapper[5031]: I0129 09:35:22.084748 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/must-gather-mmfsw" event={"ID":"586d3fab-ba5c-42ed-8ff8-4052ec209fa9","Type":"ContainerStarted","Data":"4de47a3471731543ccff8cb637efdf4a3b065581db469ee5d79768652f9c3f3b"} Jan 29 09:35:22 crc kubenswrapper[5031]: I0129 09:35:22.107680 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v86rb/must-gather-mmfsw" podStartSLOduration=2.286602343 podStartE2EDuration="9.10765763s" podCreationTimestamp="2026-01-29 09:35:13 +0000 UTC" firstStartedPulling="2026-01-29 09:35:14.448884746 +0000 UTC m=+3394.948472738" lastFinishedPulling="2026-01-29 09:35:21.269940063 +0000 UTC m=+3401.769528025" observedRunningTime="2026-01-29 09:35:22.103326334 +0000 UTC m=+3402.602914296" watchObservedRunningTime="2026-01-29 09:35:22.10765763 +0000 UTC m=+3402.607245592" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.767818 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v86rb/crc-debug-bx75x"] Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.769535 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.879333 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5489\" (UniqueName: \"kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.879507 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.981599 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5489\" (UniqueName: \"kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.981740 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:25 crc kubenswrapper[5031]: I0129 09:35:25.981886 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:26 crc kubenswrapper[5031]: I0129 09:35:26.006567 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5489\" (UniqueName: \"kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489\") pod \"crc-debug-bx75x\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:26 crc kubenswrapper[5031]: I0129 09:35:26.101793 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:26 crc kubenswrapper[5031]: W0129 09:35:26.136563 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fbb96d4_a130_4a89_85d7_462a549bf3d7.slice/crio-f24d80091b9ec665cbc3d0c1b3f378dc0c292a0018aa5f1125ab8d203f359d57 WatchSource:0}: Error finding container f24d80091b9ec665cbc3d0c1b3f378dc0c292a0018aa5f1125ab8d203f359d57: Status 404 returned error can't find the container with id f24d80091b9ec665cbc3d0c1b3f378dc0c292a0018aa5f1125ab8d203f359d57 Jan 29 09:35:27 crc kubenswrapper[5031]: I0129 09:35:27.124564 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/crc-debug-bx75x" event={"ID":"6fbb96d4-a130-4a89-85d7-462a549bf3d7","Type":"ContainerStarted","Data":"f24d80091b9ec665cbc3d0c1b3f378dc0c292a0018aa5f1125ab8d203f359d57"} Jan 29 09:35:38 crc kubenswrapper[5031]: I0129 09:35:38.229144 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/crc-debug-bx75x" event={"ID":"6fbb96d4-a130-4a89-85d7-462a549bf3d7","Type":"ContainerStarted","Data":"04c39772187722beaed87578d79a0d405c1729411b28b32c6459faa8ef86e95d"} Jan 29 09:35:38 crc kubenswrapper[5031]: I0129 09:35:38.251066 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v86rb/crc-debug-bx75x" podStartSLOduration=1.792341798 podStartE2EDuration="13.251040226s" podCreationTimestamp="2026-01-29 09:35:25 +0000 UTC" firstStartedPulling="2026-01-29 09:35:26.138343095 +0000 UTC m=+3406.637931067" lastFinishedPulling="2026-01-29 09:35:37.597041543 +0000 UTC m=+3418.096629495" observedRunningTime="2026-01-29 09:35:38.247591093 +0000 UTC m=+3418.747179045" watchObservedRunningTime="2026-01-29 09:35:38.251040226 +0000 UTC m=+3418.750628178" Jan 29 09:35:38 crc kubenswrapper[5031]: I0129 09:35:38.494050 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:35:38 crc kubenswrapper[5031]: I0129 09:35:38.494340 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:35:57 crc kubenswrapper[5031]: E0129 09:35:57.867130 5031 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fbb96d4_a130_4a89_85d7_462a549bf3d7.slice/crio-conmon-04c39772187722beaed87578d79a0d405c1729411b28b32c6459faa8ef86e95d.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:35:58 crc kubenswrapper[5031]: I0129 09:35:58.415758 5031 generic.go:334] "Generic (PLEG): container finished" podID="6fbb96d4-a130-4a89-85d7-462a549bf3d7" containerID="04c39772187722beaed87578d79a0d405c1729411b28b32c6459faa8ef86e95d" exitCode=0 Jan 29 09:35:58 crc kubenswrapper[5031]: I0129 09:35:58.415855 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/crc-debug-bx75x" 
event={"ID":"6fbb96d4-a130-4a89-85d7-462a549bf3d7","Type":"ContainerDied","Data":"04c39772187722beaed87578d79a0d405c1729411b28b32c6459faa8ef86e95d"} Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.550751 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.585202 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v86rb/crc-debug-bx75x"] Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.595153 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v86rb/crc-debug-bx75x"] Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.605413 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host\") pod \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.605533 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host" (OuterVolumeSpecName: "host") pod "6fbb96d4-a130-4a89-85d7-462a549bf3d7" (UID: "6fbb96d4-a130-4a89-85d7-462a549bf3d7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.605926 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5489\" (UniqueName: \"kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489\") pod \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\" (UID: \"6fbb96d4-a130-4a89-85d7-462a549bf3d7\") " Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.606598 5031 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fbb96d4-a130-4a89-85d7-462a549bf3d7-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.620508 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489" (OuterVolumeSpecName: "kube-api-access-n5489") pod "6fbb96d4-a130-4a89-85d7-462a549bf3d7" (UID: "6fbb96d4-a130-4a89-85d7-462a549bf3d7"). InnerVolumeSpecName "kube-api-access-n5489". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:35:59 crc kubenswrapper[5031]: I0129 09:35:59.708232 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5489\" (UniqueName: \"kubernetes.io/projected/6fbb96d4-a130-4a89-85d7-462a549bf3d7-kube-api-access-n5489\") on node \"crc\" DevicePath \"\"" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.296401 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbb96d4-a130-4a89-85d7-462a549bf3d7" path="/var/lib/kubelet/pods/6fbb96d4-a130-4a89-85d7-462a549bf3d7/volumes" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.434581 5031 scope.go:117] "RemoveContainer" containerID="04c39772187722beaed87578d79a0d405c1729411b28b32c6459faa8ef86e95d" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.434627 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-bx75x" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.777630 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v86rb/crc-debug-hzdq5"] Jan 29 09:36:00 crc kubenswrapper[5031]: E0129 09:36:00.778393 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb96d4-a130-4a89-85d7-462a549bf3d7" containerName="container-00" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.778407 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb96d4-a130-4a89-85d7-462a549bf3d7" containerName="container-00" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.778614 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbb96d4-a130-4a89-85d7-462a549bf3d7" containerName="container-00" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.782052 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.831874 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.831980 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ztl\" (UniqueName: \"kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.933510 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.933667 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ztl\" (UniqueName: \"kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.934117 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:00 crc kubenswrapper[5031]: I0129 09:36:00.950940 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ztl\" (UniqueName: \"kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl\") pod \"crc-debug-hzdq5\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.098814 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.445900 5031 generic.go:334] "Generic (PLEG): container finished" podID="06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" containerID="409d60477b79817aeb1f4b77f18ed4acf4c2262f1f076aade8723dd85520562c" exitCode=1 Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.446008 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" event={"ID":"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb","Type":"ContainerDied","Data":"409d60477b79817aeb1f4b77f18ed4acf4c2262f1f076aade8723dd85520562c"} Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.446343 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" event={"ID":"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb","Type":"ContainerStarted","Data":"cc8677675546a389fdceac9f6ef44dcfec5a676c6cb6b1f9bb0f11970f2f8479"} Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.495808 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v86rb/crc-debug-hzdq5"] Jan 29 09:36:01 crc kubenswrapper[5031]: I0129 09:36:01.511454 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v86rb/crc-debug-hzdq5"] Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.554854 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.678736 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host\") pod \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.678955 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84ztl\" (UniqueName: \"kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl\") pod \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\" (UID: \"06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb\") " Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.679431 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host" (OuterVolumeSpecName: "host") pod "06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" (UID: "06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.695586 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl" (OuterVolumeSpecName: "kube-api-access-84ztl") pod "06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" (UID: "06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb"). InnerVolumeSpecName "kube-api-access-84ztl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.782783 5031 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:36:02 crc kubenswrapper[5031]: I0129 09:36:02.782840 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84ztl\" (UniqueName: \"kubernetes.io/projected/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb-kube-api-access-84ztl\") on node \"crc\" DevicePath \"\"" Jan 29 09:36:03 crc kubenswrapper[5031]: I0129 09:36:03.463582 5031 scope.go:117] "RemoveContainer" containerID="409d60477b79817aeb1f4b77f18ed4acf4c2262f1f076aade8723dd85520562c" Jan 29 09:36:03 crc kubenswrapper[5031]: I0129 09:36:03.463642 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/crc-debug-hzdq5" Jan 29 09:36:04 crc kubenswrapper[5031]: I0129 09:36:04.294126 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" path="/var/lib/kubelet/pods/06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb/volumes" Jan 29 09:36:08 crc kubenswrapper[5031]: I0129 09:36:08.493942 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:36:08 crc kubenswrapper[5031]: I0129 09:36:08.494394 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:36:08 crc kubenswrapper[5031]: I0129 09:36:08.494440 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:36:08 crc kubenswrapper[5031]: I0129 09:36:08.495210 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:36:08 crc kubenswrapper[5031]: I0129 09:36:08.495260 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36" gracePeriod=600 Jan 29 09:36:09 crc kubenswrapper[5031]: I0129 09:36:09.529038 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36" exitCode=0 Jan 29 09:36:09 crc kubenswrapper[5031]: I0129 09:36:09.529086 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" 
event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36"} Jan 29 09:36:09 crc kubenswrapper[5031]: I0129 09:36:09.529726 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3"} Jan 29 09:36:09 crc kubenswrapper[5031]: I0129 09:36:09.529760 5031 scope.go:117] "RemoveContainer" containerID="d40fbe8e65e39cbd3ae7b1329356c29ead5fe857334f97e229a7fd35c851751b" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.298726 5031 scope.go:117] "RemoveContainer" containerID="a2f737e7909204e503088977e0fbc381b0949dbbcde395cd4cfa962088fa1366" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.318808 5031 scope.go:117] "RemoveContainer" containerID="2e32fa1359c13d0969696db27ec62a31f3c0af1897840cb4b6d5af323815d8a4" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.338115 5031 scope.go:117] "RemoveContainer" containerID="b7647b048e7eefd53be1d5d26d7f9d82df5bc6d98f529131ff8991fd2dc65d4d" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.358163 5031 scope.go:117] "RemoveContainer" containerID="d1173139539ef1ed3c5e36ec545d27139ff6ddafbf19b46c287357afa0c2fe9c" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.381782 5031 scope.go:117] "RemoveContainer" containerID="9ac22e104a84b3a5f265e5851d0123ca0b36600e3ee0d502b6982b6f242f7c07" Jan 29 09:36:10 crc kubenswrapper[5031]: I0129 09:36:10.575111 5031 scope.go:117] "RemoveContainer" containerID="bd32375e2ed9b43634c624a6a88d0d825608fecb06400aaea3031a510d1e9d18" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.767115 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mjfxm"] Jan 29 09:36:43 crc kubenswrapper[5031]: E0129 09:36:43.768928 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" containerName="container-00" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.768963 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" containerName="container-00" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.769159 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="06dcfbbf-6d5b-4e57-a1d2-f26dff3412bb" containerName="container-00" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.770446 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.836305 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjfxm"] Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.875678 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-utilities\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.876057 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndcl\" (UniqueName: \"kubernetes.io/projected/d80684b2-6d0e-4e75-a152-8b727d137289-kube-api-access-cndcl\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.876130 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-catalog-content\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.977348 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndcl\" (UniqueName: \"kubernetes.io/projected/d80684b2-6d0e-4e75-a152-8b727d137289-kube-api-access-cndcl\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.977426 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-catalog-content\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.977496 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-utilities\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.978302 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-utilities\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.978355 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d80684b2-6d0e-4e75-a152-8b727d137289-catalog-content\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:43 crc kubenswrapper[5031]: I0129 09:36:43.998218 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cndcl\" (UniqueName: \"kubernetes.io/projected/d80684b2-6d0e-4e75-a152-8b727d137289-kube-api-access-cndcl\") pod \"certified-operators-mjfxm\" (UID: \"d80684b2-6d0e-4e75-a152-8b727d137289\") " pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:44 crc kubenswrapper[5031]: I0129 09:36:44.156275 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:36:44 crc kubenswrapper[5031]: I0129 09:36:44.670097 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjfxm"] Jan 29 09:36:44 crc kubenswrapper[5031]: I0129 09:36:44.895664 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjfxm" event={"ID":"d80684b2-6d0e-4e75-a152-8b727d137289","Type":"ContainerStarted","Data":"687fe0598a720225d7bdae7fa7411b8274f9d5087b82c4397140276c50d20241"} Jan 29 09:36:44 crc kubenswrapper[5031]: I0129 09:36:44.895710 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjfxm" event={"ID":"d80684b2-6d0e-4e75-a152-8b727d137289","Type":"ContainerStarted","Data":"3b450c76e908787db608ea8b6242fd89e563cd017585a38ddb0834683132af7e"} Jan 29 09:36:45 crc kubenswrapper[5031]: I0129 09:36:45.924162 5031 generic.go:334] "Generic (PLEG): container finished" podID="d80684b2-6d0e-4e75-a152-8b727d137289" containerID="687fe0598a720225d7bdae7fa7411b8274f9d5087b82c4397140276c50d20241" exitCode=0 Jan 29 09:36:45 crc kubenswrapper[5031]: I0129 09:36:45.924229 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjfxm" event={"ID":"d80684b2-6d0e-4e75-a152-8b727d137289","Type":"ContainerDied","Data":"687fe0598a720225d7bdae7fa7411b8274f9d5087b82c4397140276c50d20241"} Jan 29 09:36:45 crc kubenswrapper[5031]: I0129 09:36:45.928626 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:36:52 crc kubenswrapper[5031]: I0129 09:36:52.989885 5031 generic.go:334] "Generic (PLEG): container finished" podID="d80684b2-6d0e-4e75-a152-8b727d137289" containerID="f386bb0a7b9de58d573b9a59dcce219fbd2e305a054f6d31bdfa894d0d5ca33b" exitCode=0 Jan 29 09:36:52 crc kubenswrapper[5031]: I0129 09:36:52.989990 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjfxm" event={"ID":"d80684b2-6d0e-4e75-a152-8b727d137289","Type":"ContainerDied","Data":"f386bb0a7b9de58d573b9a59dcce219fbd2e305a054f6d31bdfa894d0d5ca33b"} Jan 29 09:36:55 crc kubenswrapper[5031]: I0129 09:36:55.015671 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjfxm" event={"ID":"d80684b2-6d0e-4e75-a152-8b727d137289","Type":"ContainerStarted","Data":"16f82e559ea2e7d02e489e4ecc3cf324bb0a0c017acc051212637019e01d9de6"} Jan 29 09:36:55 crc kubenswrapper[5031]: I0129 09:36:55.044217 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mjfxm" podStartSLOduration=3.636124368 podStartE2EDuration="12.044195074s" podCreationTimestamp="2026-01-29 09:36:43 +0000 UTC" firstStartedPulling="2026-01-29 09:36:45.928351348 +0000 UTC m=+3486.427939300" lastFinishedPulling="2026-01-29 09:36:54.336422054 +0000 UTC m=+3494.836010006" observedRunningTime="2026-01-29 09:36:55.03772226 +0000 UTC m=+3495.537310212" watchObservedRunningTime="2026-01-29 
09:36:55.044195074 +0000 UTC m=+3495.543783026" Jan 29 09:37:02 crc kubenswrapper[5031]: I0129 09:37:02.904761 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f47855b9d-vl7rl_f5d945c8-336c-4683-8e04-2dd0de48b0ee/barbican-api/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.082448 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f47855b9d-vl7rl_f5d945c8-336c-4683-8e04-2dd0de48b0ee/barbican-api-log/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.153436 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-685b68c5cb-gfkqk_74ae4456-e53d-410e-931c-108d9b79177f/barbican-keystone-listener/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.273153 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-685b68c5cb-gfkqk_74ae4456-e53d-410e-931c-108d9b79177f/barbican-keystone-listener-log/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.341676 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86875b9f7-r8mj8_2769fca4-758e-4f92-a514-a70ca7cb0b5a/barbican-worker/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.356210 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86875b9f7-r8mj8_2769fca4-758e-4f92-a514-a70ca7cb0b5a/barbican-worker-log/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.531446 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd_91b928d8-c43f-4fa6-b673-62b42f2c88a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.625644 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/ceilometer-central-agent/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.719496 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/ceilometer-notification-agent/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.750904 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/proxy-httpd/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.843641 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/sg-core/0.log" Jan 29 09:37:03 crc kubenswrapper[5031]: I0129 09:37:03.933105 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld_95c8c7b7-5003-4dae-b405-74dc2263762c/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.081674 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v_fc3178c8-27cc-4f8e-a913-6eae9c84da49/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.157852 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.157888 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 
09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.223839 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.230053 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c053401-8bfa-4629-926e-e97653fbb397/cinder-api/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.240201 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c053401-8bfa-4629-926e-e97653fbb397/cinder-api-log/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.454592 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ec4354fa-4aef-4401-befd-f3a59619869e/probe/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.567504 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ce55669-5a60-4cbb-8994-441b7c5d0c75/cinder-scheduler/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.659782 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ec4354fa-4aef-4401-befd-f3a59619869e/cinder-backup/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.682221 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ce55669-5a60-4cbb-8994-441b7c5d0c75/probe/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.863480 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1fae57c0-f6a0-4239-b513-e37aec4f4065/probe/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.908143 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1fae57c0-f6a0-4239-b513-e37aec4f4065/cinder-volume/0.log" Jan 29 09:37:04 crc kubenswrapper[5031]: I0129 09:37:04.955292 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-h9b65_c9397ed4-a4ea-45be-9115-657795050184/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.161217 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/init/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.172303 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7_1c21c7ac-919e-43f0-92b2-0cf64df94743/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.183218 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mjfxm" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.277512 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjfxm"] Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.369217 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.369552 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8c5v8" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="registry-server" containerID="cri-o://a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31" 
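certified-operators-mjfxm follows the usual marketplace catalog flow: two short-lived containers finish with exit code 0 (the utility- and content-extraction steps into the emptyDir volumes mounted earlier), then the long-running registry server starts, its startup probe reports unhealthy once before flipping to started, and the readiness probe goes ready, at which point the superseded catalog pod certified-operators-8c5v8 is deleted with gracePeriod=2. Catalog registry servers are typically probed over the standard gRPC health service on port 50051; under that assumption, the client side of such a probe looks roughly like this in Go:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Assumption: the registry server listens on :50051 and implements the
        // standard gRPC health service, which the startup/readiness probes exercise.
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failed:", err) // analogous to probe status="unhealthy" above
            return
        }
        fmt.Println("serving status:", resp.GetStatus()) // SERVING means the probe passes
    }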
gracePeriod=2 Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.483698 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e631cdf5-7a95-457f-95ac-8632231e0cd7/glance-httpd/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.546382 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/init/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.569613 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/dnsmasq-dns/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.801557 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4e136d48-7be7-4b0f-a45c-da6b3d218b8d/glance-httpd/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.871070 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.877915 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e631cdf5-7a95-457f-95ac-8632231e0cd7/glance-log/0.log" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.969082 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities\") pod \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.969292 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24\") pod \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.969470 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content\") pod \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\" (UID: \"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1\") " Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.972987 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities" (OuterVolumeSpecName: "utilities") pod "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" (UID: "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.978402 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24" (OuterVolumeSpecName: "kube-api-access-nsf24") pod "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" (UID: "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1"). InnerVolumeSpecName "kube-api-access-nsf24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:37:05 crc kubenswrapper[5031]: I0129 09:37:05.994656 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4e136d48-7be7-4b0f-a45c-da6b3d218b8d/glance-log/0.log" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.075743 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.075780 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-kube-api-access-nsf24\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.095594 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" (UID: "5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.136243 5031 generic.go:334] "Generic (PLEG): container finished" podID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerID="a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31" exitCode=0 Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.136350 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c5v8" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.136425 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerDied","Data":"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31"} Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.136464 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c5v8" event={"ID":"5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1","Type":"ContainerDied","Data":"3c9ab45798d81a081b3ee629dbe6397f169f981aa0be5e4380b49b3c1ea54450"} Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.136482 5031 scope.go:117] "RemoveContainer" containerID="a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.155929 5031 scope.go:117] "RemoveContainer" containerID="c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.175442 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.178808 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.184977 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8c5v8"] Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.201550 5031 scope.go:117] "RemoveContainer" containerID="0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0" Jan 29 09:37:06 crc kubenswrapper[5031]: 
I0129 09:37:06.241798 5031 scope.go:117] "RemoveContainer" containerID="a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31" Jan 29 09:37:06 crc kubenswrapper[5031]: E0129 09:37:06.242284 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31\": container with ID starting with a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31 not found: ID does not exist" containerID="a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.242330 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31"} err="failed to get container status \"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31\": rpc error: code = NotFound desc = could not find container \"a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31\": container with ID starting with a9b0eb71d8416afc17d8eeadf5ad3fa74ff449f8c32fdf313d1e7968fced6f31 not found: ID does not exist" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.242354 5031 scope.go:117] "RemoveContainer" containerID="c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a" Jan 29 09:37:06 crc kubenswrapper[5031]: E0129 09:37:06.242806 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a\": container with ID starting with c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a not found: ID does not exist" containerID="c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.242865 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a"} err="failed to get container status \"c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a\": rpc error: code = NotFound desc = could not find container \"c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a\": container with ID starting with c32be89ef2869e1913d7caa2b63a6c2a1d8e255e747a472f71ca82addae8dc1a not found: ID does not exist" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.242898 5031 scope.go:117] "RemoveContainer" containerID="0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0" Jan 29 09:37:06 crc kubenswrapper[5031]: E0129 09:37:06.243213 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0\": container with ID starting with 0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0 not found: ID does not exist" containerID="0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.243239 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0"} err="failed to get container status \"0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0\": rpc error: code = NotFound desc = could not find container \"0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0\": container 
with ID starting with 0156bb7c4d820e7676f16a9df29e25296971ab978aed9ed6113a07a4357f1ad0 not found: ID does not exist" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.293765 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" path="/var/lib/kubelet/pods/5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1/volumes" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.331960 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b47759886-4vh7j_7cfc507f-5595-4ff5-9f5f-8942dc5468dc/horizon-log/0.log" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.353754 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b47759886-4vh7j_7cfc507f-5595-4ff5-9f5f-8942dc5468dc/horizon/0.log" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.756389 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tc282_83ca1366-5060-4771-ae03-b06595c0d5fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:06 crc kubenswrapper[5031]: I0129 09:37:06.789946 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4_49194734-e76b-4b96-bf9c-a4a73782e04b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.212729 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6b6fcb467b-dc5s8_11cb22e9-f3f2-4a42-804c-aaa47ca31a16/keystone-api/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.401828 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494621-vw7kq_d3ee4f52-58c1-4e47-b074-1f2a379b5eb2/keystone-cron/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.453940 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d/kube-state-metrics/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.609002 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7z526_4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.659808 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2ce35ae9-25db-409d-af6b-0f5d94e61ea7/manila-api-log/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.737895 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2ce35ae9-25db-409d-af6b-0f5d94e61ea7/manila-api/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.817159 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-ba01-account-create-update-j9tfj_1c135329-1c87-495b-affc-91c0520b26ba/mariadb-account-create-update/0.log" Jan 29 09:37:07 crc kubenswrapper[5031]: I0129 09:37:07.927932 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-2knmh_6022f9c4-3a0d-4f89-881d-b6a17970ac9b/mariadb-database-create/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.145938 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-sync-fmrct_73da3d2b-eb56-4382-9091-6d353d461127/manila-db-sync/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.267936 5031 log.go:25] "Finished parsing log file" 
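The trio of "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above is benign: by the time the kubelet's RemoveContainer runs for the removed catalog pod, CRI-O has already discarded those containers, so the runtime returns gRPC NotFound and the deletor logs it and moves on rather than retrying. The usual pattern for that kind of idempotent delete, sketched in Go (the remove callback is a stand-in for the actual CRI call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Illustrative: a NotFound from the runtime means the container is already
    // gone, so the deletion is treated as done instead of being retried.
    func deleteContainerIdempotent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already removed; ignoring\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        _ = deleteContainerIdempotent(gone, "a9b0eb71d841")
    }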
path="/var/log/pods/openstack_manila-scheduler-0_d4320ea6-3657-454b-b535-3776f405d823/probe/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.418398 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d4320ea6-3657-454b-b535-3776f405d823/manila-scheduler/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.502037 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1ae94363-9689-48ed-8c8d-c1668fb5955a/manila-share/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.623883 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1ae94363-9689-48ed-8c8d-c1668fb5955a/probe/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.771817 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558dccb5cc-bkkrn_8b30d63e-6219-4832-868b-9a115b30f433/neutron-api/0.log" Jan 29 09:37:08 crc kubenswrapper[5031]: I0129 09:37:08.822715 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558dccb5cc-bkkrn_8b30d63e-6219-4832-868b-9a115b30f433/neutron-httpd/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.064074 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2_5e820097-42d1-47ac-84d1-824842f92b8b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.400758 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_92d916f0-bb3a-45de-b176-616bd8a170e4/nova-api-log/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.435862 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_49fa8048-1d04-42bc-8e37-b6b40e7e5ece/nova-cell0-conductor-conductor/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.438785 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_92d916f0-bb3a-45de-b176-616bd8a170e4/nova-api-api/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.749496 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_fd945c64-5938-4cc6-9eb5-17e013e36aba/nova-cell1-conductor-conductor/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.812934 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_15c7d35a-0f80-4823-8d8d-371e1f76f869/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 09:37:09 crc kubenswrapper[5031]: I0129 09:37:09.918768 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts_05fc07ec-828a-468d-be87-1fe3925dfb0c/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:10 crc kubenswrapper[5031]: I0129 09:37:10.345868 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_97911fdf-2136-4700-8474-d165d6de4c33/nova-metadata-log/0.log" Jan 29 09:37:10 crc kubenswrapper[5031]: I0129 09:37:10.624684 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/mysql-bootstrap/0.log" Jan 29 09:37:10 crc kubenswrapper[5031]: I0129 09:37:10.628899 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_57931a94-e323-4a04-915d-735dc7a09030/nova-scheduler-scheduler/0.log" Jan 29 
09:37:10 crc kubenswrapper[5031]: I0129 09:37:10.844421 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/galera/0.log" Jan 29 09:37:10 crc kubenswrapper[5031]: I0129 09:37:10.868412 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/mysql-bootstrap/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.057188 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/mysql-bootstrap/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.238208 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/mysql-bootstrap/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.260484 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/galera/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.387287 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_97911fdf-2136-4700-8474-d165d6de4c33/nova-metadata-metadata/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.407832 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7cd1d91b-5c5a-425c-bb48-ed97702719d6/openstackclient/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.550137 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-khdxz_8e57b4c5-5c87-4720-9586-c4e7a8cf763f/openstack-network-exporter/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.590875 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server-init/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.842007 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovs-vswitchd/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.866866 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server/0.log" Jan 29 09:37:11 crc kubenswrapper[5031]: I0129 09:37:11.874033 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server-init/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.090533 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-z6mp7_b34fd049-3d7e-4d5d-acfc-8e4c450bf857/ovn-controller/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.145229 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kdq49_764d97ce-43f8-4cce-9b06-61f1a548199f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.312396 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2f3941fd-64d1-4652-83b1-e89d547e4df5/openstack-network-exporter/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.410739 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2f3941fd-64d1-4652-83b1-e89d547e4df5/ovn-northd/0.log" Jan 29 09:37:12 crc 
kubenswrapper[5031]: I0129 09:37:12.511076 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_11c52100-0b09-4377-b50e-84c78d3ddf74/openstack-network-exporter/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.579343 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_11c52100-0b09-4377-b50e-84c78d3ddf74/ovsdbserver-nb/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.700151 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0ad1ce96-1373-407b-b4ec-700934ef6ac4/openstack-network-exporter/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.763080 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0ad1ce96-1373-407b-b4ec-700934ef6ac4/ovsdbserver-sb/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.968071 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c4fdc6744-xx4wj_e009c8bd-2d71-405b-a166-53cf1451c8f0/placement-api/0.log" Jan 29 09:37:12 crc kubenswrapper[5031]: I0129 09:37:12.978471 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c4fdc6744-xx4wj_e009c8bd-2d71-405b-a166-53cf1451c8f0/placement-log/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.057814 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/setup-container/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.276339 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/setup-container/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.300728 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/rabbitmq/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.307495 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/setup-container/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.578260 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/setup-container/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.605587 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77_5a33f933-f687-47f9-868b-02c0a633ab0f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.615569 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/rabbitmq/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.777608 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb_b62042d2-d6ae-42b6-abaa-b08bdb66257d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:13 crc kubenswrapper[5031]: I0129 09:37:13.875999 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pppc_7a27e64c-0c6a-497f-bdae-50302a72b898/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:14 crc kubenswrapper[5031]: I0129 09:37:14.106935 5031 log.go:25] "Finished parsing log file" 
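The long run of "Finished parsing log file" entries is the kubelet reading container logs back for API clients, presumably driven by the must-gather collection underway in openshift-must-gather-v86rb. Every path follows the standard kubelet layout /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log, so the namespace, pod, and container can be recovered mechanically; a small Go sketch:

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // Splits a /var/log/pods path into its components, per the kubelet layout
    // /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log.
    func parsePodLogPath(p string) (ns, pod, uid, container, restart string, err error) {
        rel, err := filepath.Rel("/var/log/pods", p)
        if err != nil {
            return
        }
        parts := strings.Split(rel, string(filepath.Separator))
        if len(parts) != 3 {
            return "", "", "", "", "", fmt.Errorf("unexpected layout: %s", p)
        }
        meta := strings.SplitN(parts[0], "_", 3)
        if len(meta) != 3 {
            return "", "", "", "", "", fmt.Errorf("unexpected pod dir: %s", parts[0])
        }
        return meta[0], meta[1], meta[2], parts[1], strings.TrimSuffix(parts[2], ".log"), nil
    }

    func main() {
        ns, pod, uid, c, n, err := parsePodLogPath(
            "/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/sg-core/0.log")
        fmt.Println(ns, pod, uid, c, n, err)
        // openstack ceilometer-0 f8949618-20d4-4cd9-8b4b-6abcf3684676 sg-core 0 <nil>
    }

The underscore-separated pod directory splits unambiguously because namespaces and pod names cannot contain underscores, which is why SplitN with a limit of 3 is sufficient.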
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6t4cp_8c91cd46-761e-4015-a2ea-90647c5a7be5/ssh-known-hosts-edpm-deployment/0.log" Jan 29 09:37:14 crc kubenswrapper[5031]: I0129 09:37:14.116513 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e/tempest-tests-tempest-tests-runner/0.log" Jan 29 09:37:14 crc kubenswrapper[5031]: I0129 09:37:14.366623 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a5239287-c272-4c5b-b72b-c6fd55567ae8/test-operator-logs-container/0.log" Jan 29 09:37:14 crc kubenswrapper[5031]: I0129 09:37:14.420754 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8_71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 09:37:35 crc kubenswrapper[5031]: I0129 09:37:35.860499 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7411c3e7-5370-4bc2-85b8-aa1a137d948b/memcached/0.log" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.797976 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:42 crc kubenswrapper[5031]: E0129 09:37:42.798740 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="extract-utilities" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.798774 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="extract-utilities" Jan 29 09:37:42 crc kubenswrapper[5031]: E0129 09:37:42.798802 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="extract-content" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.798808 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="extract-content" Jan 29 09:37:42 crc kubenswrapper[5031]: E0129 09:37:42.798834 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="registry-server" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.798840 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="registry-server" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.799002 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd56b5c-ccf1-4132-a5fe-c5d6ed8068e1" containerName="registry-server" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.800254 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.839863 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.963454 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9h8\" (UniqueName: \"kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.963712 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:42 crc kubenswrapper[5031]: I0129 09:37:42.963821 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.065691 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km9h8\" (UniqueName: \"kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.065753 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.065791 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.066331 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.066428 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.234296 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-km9h8\" (UniqueName: \"kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8\") pod \"community-operators-vl9lt\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.239570 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.462023 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.565140 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.578997 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.640850 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.885386 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/extract/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.887685 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.917163 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log" Jan 29 09:37:43 crc kubenswrapper[5031]: I0129 09:37:43.962410 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.199802 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-6pqwq_9d7a2eca-248d-464e-b698-5f4daee374d3/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.226830 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-mppwm_a1850026-d710-4da7-883b-1b7149900523/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.341426 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-f5hc7_59d726a8-dfae-47c6-a479-682b32601f3b/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.483296 5031 generic.go:334] "Generic (PLEG): container finished" podID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerID="875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565" exitCode=0 Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 
09:37:44.483353 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerDied","Data":"875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565"} Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.483409 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerStarted","Data":"a8d94f0bd13d0714cf75439f83b18a3fe40aec78055d73c7eaeb23049ea66d6d"} Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.489670 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7857f788f-x5hq5_6b581b93-53b8-4bda-a3bc-7ab837f7aec3/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.550216 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-tt4jw_fef04ed6-9416-4599-a960-cde56635da29/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.729125 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-ftmh8_911c19b6-72d1-4363-bae0-02bb5290a0c3/manager/0.log" Jan 29 09:37:44 crc kubenswrapper[5031]: I0129 09:37:44.997311 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-8dpt8_5b5b3ff2-7c9d-412e-8eef-a203c3096694/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.027703 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-tpj2j_7771acfe-a081-49f6-afa7-79c7436486b4/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.189628 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6978b79747-zhkh2_8a42f832-5088-4110-a8a9-cc3203ea4677/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.272849 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-9nxrk_3828c08a-7f8d-4d56-8aad-9fb6a7ce294a/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.491874 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerStarted","Data":"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0"} Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.499756 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-r6hlv_b0b4b733-caa0-46a2-854a-0a96d676fe86/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.583053 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-ltbs2_4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.763699 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-hhbpv_b7af41a8-c82f-4e03-b775-ad36d931b8c5/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.831704 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-b6c99d9c5-pppjk_652f139c-6f12-42e1-88e8-fef00b383015/manager/0.log" Jan 29 09:37:45 crc kubenswrapper[5031]: I0129 09:37:45.944507 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp_5925efab-b140-47f9-9b05-309973965161/manager/0.log" Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.153921 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-694c86d6f5-8tvx7_9d3b6973-ffdd-445f-b03f-3783ff2c3159/operator/0.log" Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.418488 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-znw6z_d18ce80b-f96c-41a4-80b5-fe959665c78a/registry-server/0.log" Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.506434 5031 generic.go:334] "Generic (PLEG): container finished" podID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerID="b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0" exitCode=0 Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.506486 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerDied","Data":"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0"} Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.690828 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-fn2tc_6046088f-7960-4675-a8a6-06eb441cea9f/manager/0.log" Jan 29 09:37:46 crc kubenswrapper[5031]: I0129 09:37:46.807780 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6hd46_b8416e4f-a2ee-46c8-90ff-2ed68301825e/manager/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.015623 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rwmm7_c3b8b573-36e5-48c9-bfb5-adff7608c393/operator/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.286300 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-46js4_3fb6584b-e21d-4c41-af40-6099ceda26fe/manager/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.414203 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-684f4d697d-h5vhw_f2eaf23b-b589-4c35-bb14-28a1aa1d9099/manager/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.531841 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerStarted","Data":"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3"} Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.556630 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vl9lt" podStartSLOduration=3.028370838 podStartE2EDuration="5.556605429s" podCreationTimestamp="2026-01-29 09:37:42 +0000 UTC" firstStartedPulling="2026-01-29 09:37:44.485383745 +0000 UTC m=+3544.984971697" lastFinishedPulling="2026-01-29 09:37:47.013618336 +0000 UTC m=+3547.513206288" 
observedRunningTime="2026-01-29 09:37:47.549784076 +0000 UTC m=+3548.049372038" watchObservedRunningTime="2026-01-29 09:37:47.556605429 +0000 UTC m=+3548.056193381" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.601284 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-tgkd9_418034d3-f759-4efa-930f-c66f10db0fe2/manager/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.786076 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-vt2wm_4e1db845-0d5b-489a-b3bf-a2921dc81cdb/manager/0.log" Jan 29 09:37:47 crc kubenswrapper[5031]: I0129 09:37:47.826503 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fd9db8655-wjbcx_bacd8bd3-412c-435e-b71d-e43f39daba5d/manager/0.log" Jan 29 09:37:53 crc kubenswrapper[5031]: I0129 09:37:53.462678 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:53 crc kubenswrapper[5031]: I0129 09:37:53.463275 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:53 crc kubenswrapper[5031]: I0129 09:37:53.558467 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:53 crc kubenswrapper[5031]: I0129 09:37:53.625135 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:53 crc kubenswrapper[5031]: I0129 09:37:53.803119 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:55 crc kubenswrapper[5031]: I0129 09:37:55.591643 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vl9lt" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="registry-server" containerID="cri-o://2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3" gracePeriod=2 Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.146916 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.236878 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities\") pod \"e6afbb14-cb63-478b-b8f8-a979a71e3466\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.236930 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content\") pod \"e6afbb14-cb63-478b-b8f8-a979a71e3466\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.236993 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km9h8\" (UniqueName: \"kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8\") pod \"e6afbb14-cb63-478b-b8f8-a979a71e3466\" (UID: \"e6afbb14-cb63-478b-b8f8-a979a71e3466\") " Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.238460 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities" (OuterVolumeSpecName: "utilities") pod "e6afbb14-cb63-478b-b8f8-a979a71e3466" (UID: "e6afbb14-cb63-478b-b8f8-a979a71e3466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.243941 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8" (OuterVolumeSpecName: "kube-api-access-km9h8") pod "e6afbb14-cb63-478b-b8f8-a979a71e3466" (UID: "e6afbb14-cb63-478b-b8f8-a979a71e3466"). InnerVolumeSpecName "kube-api-access-km9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.315132 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6afbb14-cb63-478b-b8f8-a979a71e3466" (UID: "e6afbb14-cb63-478b-b8f8-a979a71e3466"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.339710 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km9h8\" (UniqueName: \"kubernetes.io/projected/e6afbb14-cb63-478b-b8f8-a979a71e3466-kube-api-access-km9h8\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.339754 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.339768 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6afbb14-cb63-478b-b8f8-a979a71e3466-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.604193 5031 generic.go:334] "Generic (PLEG): container finished" podID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerID="2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3" exitCode=0 Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.604278 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerDied","Data":"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3"} Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.604339 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl9lt" event={"ID":"e6afbb14-cb63-478b-b8f8-a979a71e3466","Type":"ContainerDied","Data":"a8d94f0bd13d0714cf75439f83b18a3fe40aec78055d73c7eaeb23049ea66d6d"} Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.604334 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl9lt" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.604387 5031 scope.go:117] "RemoveContainer" containerID="2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.632661 5031 scope.go:117] "RemoveContainer" containerID="b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.659502 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.675095 5031 scope.go:117] "RemoveContainer" containerID="875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.676981 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vl9lt"] Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.710949 5031 scope.go:117] "RemoveContainer" containerID="2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3" Jan 29 09:37:56 crc kubenswrapper[5031]: E0129 09:37:56.712104 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3\": container with ID starting with 2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3 not found: ID does not exist" containerID="2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.712191 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3"} err="failed to get container status \"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3\": rpc error: code = NotFound desc = could not find container \"2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3\": container with ID starting with 2f55c7ab02dd6da40eeb61e735b9287d22e307616e4bca7a064fdf3ad88c62f3 not found: ID does not exist" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.712243 5031 scope.go:117] "RemoveContainer" containerID="b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0" Jan 29 09:37:56 crc kubenswrapper[5031]: E0129 09:37:56.712791 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0\": container with ID starting with b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0 not found: ID does not exist" containerID="b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.712839 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0"} err="failed to get container status \"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0\": rpc error: code = NotFound desc = could not find container \"b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0\": container with ID starting with b8b0170210ff30e86d3c7d1105da561128428bd04a4a8e6352f62af21f1d0ec0 not found: ID does not exist" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.712879 5031 scope.go:117] "RemoveContainer" 
containerID="875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565" Jan 29 09:37:56 crc kubenswrapper[5031]: E0129 09:37:56.713403 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565\": container with ID starting with 875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565 not found: ID does not exist" containerID="875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565" Jan 29 09:37:56 crc kubenswrapper[5031]: I0129 09:37:56.713432 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565"} err="failed to get container status \"875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565\": rpc error: code = NotFound desc = could not find container \"875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565\": container with ID starting with 875d6dee1854b2d1338e353d2554f26a47ae78919598e065de70da56ad1f1565 not found: ID does not exist" Jan 29 09:37:58 crc kubenswrapper[5031]: I0129 09:37:58.295067 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" path="/var/lib/kubelet/pods/e6afbb14-cb63-478b-b8f8-a979a71e3466/volumes" Jan 29 09:38:07 crc kubenswrapper[5031]: I0129 09:38:07.892845 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kn9ds_66c6d48a-bdee-4f5b-b0ca-da05372e1ba2/control-plane-machine-set-operator/0.log" Jan 29 09:38:08 crc kubenswrapper[5031]: I0129 09:38:08.081402 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-w2sql_8a3bbd5e-4071-4761-b455-e830e12dfa81/kube-rbac-proxy/0.log" Jan 29 09:38:08 crc kubenswrapper[5031]: I0129 09:38:08.136825 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-w2sql_8a3bbd5e-4071-4761-b455-e830e12dfa81/machine-api-operator/0.log" Jan 29 09:38:08 crc kubenswrapper[5031]: I0129 09:38:08.493729 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:38:08 crc kubenswrapper[5031]: I0129 09:38:08.493779 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:38:20 crc kubenswrapper[5031]: I0129 09:38:20.720612 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hfrt9_18d66dd7-f94a-41fd-9d04-f09c1cea0e58/cert-manager-controller/0.log" Jan 29 09:38:20 crc kubenswrapper[5031]: I0129 09:38:20.904626 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l47tb_f62b13b3-ff83-4f97-a291-8067c9f5cdc9/cert-manager-cainjector/0.log" Jan 29 09:38:20 crc kubenswrapper[5031]: I0129 09:38:20.966416 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ff66k_8983adca-9e9f-4d65-9ae5-091fa81877a0/cert-manager-webhook/0.log" Jan 29 09:38:33 crc kubenswrapper[5031]: I0129 09:38:33.693668 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gcrhb_5c55f203-c18f-402b-a766-a1f291a5b3dc/nmstate-console-plugin/0.log" Jan 29 09:38:33 crc kubenswrapper[5031]: I0129 09:38:33.862090 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wzjdc_21eadbd2-15f3-47aa-8428-fb22325e29a6/nmstate-handler/0.log" Jan 29 09:38:33 crc kubenswrapper[5031]: I0129 09:38:33.903437 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w2269_27616237-18b5-463e-be46-59392bbff884/kube-rbac-proxy/0.log" Jan 29 09:38:33 crc kubenswrapper[5031]: I0129 09:38:33.960328 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w2269_27616237-18b5-463e-be46-59392bbff884/nmstate-metrics/0.log" Jan 29 09:38:34 crc kubenswrapper[5031]: I0129 09:38:34.095604 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-vdbl7_1e390d20-964f-4337-a396-d56cf85b5a4d/nmstate-operator/0.log" Jan 29 09:38:34 crc kubenswrapper[5031]: I0129 09:38:34.114247 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-scf9x_2a6126a5-5e52-418a-ba32-ce426e8ce58c/nmstate-webhook/0.log" Jan 29 09:38:38 crc kubenswrapper[5031]: I0129 09:38:38.493749 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:38:38 crc kubenswrapper[5031]: I0129 09:38:38.494407 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.479741 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2ls2g_d0fae1e4-5509-482f-9430-17a7148dc235/controller/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.504923 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2ls2g_d0fae1e4-5509-482f-9430-17a7148dc235/kube-rbac-proxy/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.700900 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.913060 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.938497 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.959221 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:39:01 crc kubenswrapper[5031]: I0129 09:39:01.959974 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.096832 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.100646 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.151901 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.159930 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.362419 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.362839 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.381658 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.412468 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/controller/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.552158 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/kube-rbac-proxy/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.604186 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/frr-metrics/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.638246 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/kube-rbac-proxy-frr/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.794927 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/reloader/0.log" Jan 29 09:39:02 crc kubenswrapper[5031]: I0129 09:39:02.868280 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7pdgn_4fef4c25-5a46-45ba-bc17-fe5696028ac9/frr-k8s-webhook-server/0.log" Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.032573 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7777f7948d-dxh4l_417f7fc8-934e-415e-89cc-fb09ba21917e/manager/0.log" Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.042421 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-ba01-account-create-update-j9tfj"] Jan 29 09:39:03 
crc kubenswrapper[5031]: I0129 09:39:03.053917 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-2knmh"] Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.062651 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-2knmh"] Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.070563 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-ba01-account-create-update-j9tfj"] Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.250865 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d7d76dfc-zj8mx_729c722e-e67a-4ff6-a4cf-0f6a68fd2c66/webhook-server/0.log" Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.402677 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dsws8_28efe09e-8a3b-4a66-8818-18a1bc11b34d/kube-rbac-proxy/0.log" Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.899773 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dsws8_28efe09e-8a3b-4a66-8818-18a1bc11b34d/speaker/0.log" Jan 29 09:39:03 crc kubenswrapper[5031]: I0129 09:39:03.939600 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/frr/0.log" Jan 29 09:39:04 crc kubenswrapper[5031]: I0129 09:39:04.293432 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c135329-1c87-495b-affc-91c0520b26ba" path="/var/lib/kubelet/pods/1c135329-1c87-495b-affc-91c0520b26ba/volumes" Jan 29 09:39:04 crc kubenswrapper[5031]: I0129 09:39:04.294819 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6022f9c4-3a0d-4f89-881d-b6a17970ac9b" path="/var/lib/kubelet/pods/6022f9c4-3a0d-4f89-881d-b6a17970ac9b/volumes" Jan 29 09:39:08 crc kubenswrapper[5031]: I0129 09:39:08.493743 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:39:08 crc kubenswrapper[5031]: I0129 09:39:08.494268 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:39:08 crc kubenswrapper[5031]: I0129 09:39:08.494323 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:39:08 crc kubenswrapper[5031]: I0129 09:39:08.495200 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:39:08 crc kubenswrapper[5031]: I0129 09:39:08.495263 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" 
containerID="cri-o://1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" gracePeriod=600 Jan 29 09:39:08 crc kubenswrapper[5031]: E0129 09:39:08.618528 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:39:09 crc kubenswrapper[5031]: I0129 09:39:09.392808 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" exitCode=0 Jan 29 09:39:09 crc kubenswrapper[5031]: I0129 09:39:09.392899 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3"} Jan 29 09:39:09 crc kubenswrapper[5031]: I0129 09:39:09.393231 5031 scope.go:117] "RemoveContainer" containerID="a7270cec15a957c2029d22962e4647ab60cfb192751d9117ef305ce5cc990f36" Jan 29 09:39:09 crc kubenswrapper[5031]: I0129 09:39:09.393909 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:39:09 crc kubenswrapper[5031]: E0129 09:39:09.394227 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:39:10 crc kubenswrapper[5031]: I0129 09:39:10.767244 5031 scope.go:117] "RemoveContainer" containerID="47856d2b5ccfd3cd7354ff707a81e3b60c816732e50e53884cc8c7984aa4d65b" Jan 29 09:39:10 crc kubenswrapper[5031]: I0129 09:39:10.811160 5031 scope.go:117] "RemoveContainer" containerID="311cc19d968bed58031f1a386154be57017827571baf518abc59dafda135a65c" Jan 29 09:39:16 crc kubenswrapper[5031]: I0129 09:39:16.735895 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.001783 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.006287 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.017573 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.212361 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.241052 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.280779 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/extract/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.429545 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.611979 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.643583 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.662142 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.860797 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.882305 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:39:17 crc kubenswrapper[5031]: I0129 09:39:17.932745 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/extract/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.069199 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.243718 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.257518 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.265577 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.418837 5031 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.462280 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.652091 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.652822 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/registry-server/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.905650 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.937938 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:39:18 crc kubenswrapper[5031]: I0129 09:39:18.948289 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.144538 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.158565 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.507388 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qjfs_75a63559-30d6-47bc-9f30-5385de9826f0/marketplace-operator/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.525601 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.704969 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/registry-server/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.725934 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.734786 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.773314 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:39:19 crc kubenswrapper[5031]: I0129 09:39:19.964698 5031 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.005199 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.122670 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/registry-server/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.165722 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.406701 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.442397 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.498486 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.638807 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:39:20 crc kubenswrapper[5031]: I0129 09:39:20.675224 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:39:21 crc kubenswrapper[5031]: I0129 09:39:21.208811 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/registry-server/0.log" Jan 29 09:39:23 crc kubenswrapper[5031]: I0129 09:39:23.282437 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:39:23 crc kubenswrapper[5031]: E0129 09:39:23.283291 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:39:31 crc kubenswrapper[5031]: I0129 09:39:31.048631 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-fmrct"] Jan 29 09:39:31 crc kubenswrapper[5031]: I0129 09:39:31.057246 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-fmrct"] Jan 29 09:39:32 crc kubenswrapper[5031]: I0129 09:39:32.296502 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73da3d2b-eb56-4382-9091-6d353d461127" path="/var/lib/kubelet/pods/73da3d2b-eb56-4382-9091-6d353d461127/volumes" Jan 29 09:39:36 crc kubenswrapper[5031]: I0129 09:39:36.283067 5031 
scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:39:36 crc kubenswrapper[5031]: E0129 09:39:36.284999 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:39:51 crc kubenswrapper[5031]: I0129 09:39:51.282083 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:39:51 crc kubenswrapper[5031]: E0129 09:39:51.282728 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:39:58 crc kubenswrapper[5031]: E0129 09:39:58.938589 5031 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.153:58490->38.129.56.153:38585: write tcp 38.129.56.153:58490->38.129.56.153:38585: write: broken pipe Jan 29 09:40:06 crc kubenswrapper[5031]: I0129 09:40:06.283074 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:40:06 crc kubenswrapper[5031]: E0129 09:40:06.283948 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:40:10 crc kubenswrapper[5031]: I0129 09:40:10.901385 5031 scope.go:117] "RemoveContainer" containerID="82a4f282acb27b575f301c924a204e3ba6d40f2b111b0191d052f1ebbc322763" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.559924 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:17 crc kubenswrapper[5031]: E0129 09:40:17.562240 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="registry-server" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.562268 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="registry-server" Jan 29 09:40:17 crc kubenswrapper[5031]: E0129 09:40:17.562296 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="extract-utilities" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.562306 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="extract-utilities" Jan 29 09:40:17 crc kubenswrapper[5031]: E0129 09:40:17.562314 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" 
containerName="extract-content" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.562320 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="extract-content" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.562519 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6afbb14-cb63-478b-b8f8-a979a71e3466" containerName="registry-server" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.564104 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.577514 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.702094 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.702684 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2rq\" (UniqueName: \"kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.702734 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.805080 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c2rq\" (UniqueName: \"kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.805142 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.805236 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.805659 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " 
pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.805710 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.828101 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c2rq\" (UniqueName: \"kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq\") pod \"redhat-marketplace-f4pll\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:17 crc kubenswrapper[5031]: I0129 09:40:17.892823 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:18 crc kubenswrapper[5031]: I0129 09:40:18.555552 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:19 crc kubenswrapper[5031]: I0129 09:40:19.253647 5031 generic.go:334] "Generic (PLEG): container finished" podID="6e040a4f-99b8-4622-8592-2007d5ede297" containerID="01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0" exitCode=0 Jan 29 09:40:19 crc kubenswrapper[5031]: I0129 09:40:19.253936 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerDied","Data":"01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0"} Jan 29 09:40:19 crc kubenswrapper[5031]: I0129 09:40:19.253969 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerStarted","Data":"3f49ab639d43398d101bf812343796ad61984a9848f4d7d9538ca3455c02a269"} Jan 29 09:40:19 crc kubenswrapper[5031]: I0129 09:40:19.283932 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:40:19 crc kubenswrapper[5031]: E0129 09:40:19.284222 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:40:20 crc kubenswrapper[5031]: I0129 09:40:20.267495 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerStarted","Data":"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742"} Jan 29 09:40:21 crc kubenswrapper[5031]: I0129 09:40:21.279457 5031 generic.go:334] "Generic (PLEG): container finished" podID="6e040a4f-99b8-4622-8592-2007d5ede297" containerID="4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742" exitCode=0 Jan 29 09:40:21 crc kubenswrapper[5031]: I0129 09:40:21.279551 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" 
event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerDied","Data":"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742"} Jan 29 09:40:22 crc kubenswrapper[5031]: I0129 09:40:22.303193 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerStarted","Data":"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d"} Jan 29 09:40:22 crc kubenswrapper[5031]: I0129 09:40:22.351717 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4pll" podStartSLOduration=2.802135469 podStartE2EDuration="5.351694728s" podCreationTimestamp="2026-01-29 09:40:17 +0000 UTC" firstStartedPulling="2026-01-29 09:40:19.259677249 +0000 UTC m=+3699.759265211" lastFinishedPulling="2026-01-29 09:40:21.809236498 +0000 UTC m=+3702.308824470" observedRunningTime="2026-01-29 09:40:22.318534058 +0000 UTC m=+3702.818122020" watchObservedRunningTime="2026-01-29 09:40:22.351694728 +0000 UTC m=+3702.851282680" Jan 29 09:40:27 crc kubenswrapper[5031]: I0129 09:40:27.893720 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:27 crc kubenswrapper[5031]: I0129 09:40:27.894253 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:27 crc kubenswrapper[5031]: I0129 09:40:27.958712 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:28 crc kubenswrapper[5031]: I0129 09:40:28.420422 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:28 crc kubenswrapper[5031]: I0129 09:40:28.489139 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:30 crc kubenswrapper[5031]: I0129 09:40:30.290976 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:40:30 crc kubenswrapper[5031]: E0129 09:40:30.293029 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:40:30 crc kubenswrapper[5031]: I0129 09:40:30.366941 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4pll" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="registry-server" containerID="cri-o://cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d" gracePeriod=2 Jan 29 09:40:30 crc kubenswrapper[5031]: I0129 09:40:30.932428 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.045213 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c2rq\" (UniqueName: \"kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq\") pod \"6e040a4f-99b8-4622-8592-2007d5ede297\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.045345 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content\") pod \"6e040a4f-99b8-4622-8592-2007d5ede297\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.045491 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities\") pod \"6e040a4f-99b8-4622-8592-2007d5ede297\" (UID: \"6e040a4f-99b8-4622-8592-2007d5ede297\") " Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.046964 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities" (OuterVolumeSpecName: "utilities") pod "6e040a4f-99b8-4622-8592-2007d5ede297" (UID: "6e040a4f-99b8-4622-8592-2007d5ede297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.061606 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq" (OuterVolumeSpecName: "kube-api-access-5c2rq") pod "6e040a4f-99b8-4622-8592-2007d5ede297" (UID: "6e040a4f-99b8-4622-8592-2007d5ede297"). InnerVolumeSpecName "kube-api-access-5c2rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.079531 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e040a4f-99b8-4622-8592-2007d5ede297" (UID: "6e040a4f-99b8-4622-8592-2007d5ede297"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.148621 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.148653 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c2rq\" (UniqueName: \"kubernetes.io/projected/6e040a4f-99b8-4622-8592-2007d5ede297-kube-api-access-5c2rq\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.148662 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e040a4f-99b8-4622-8592-2007d5ede297-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.377602 5031 generic.go:334] "Generic (PLEG): container finished" podID="6e040a4f-99b8-4622-8592-2007d5ede297" containerID="cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d" exitCode=0 Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.377665 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerDied","Data":"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d"} Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.377990 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4pll" event={"ID":"6e040a4f-99b8-4622-8592-2007d5ede297","Type":"ContainerDied","Data":"3f49ab639d43398d101bf812343796ad61984a9848f4d7d9538ca3455c02a269"} Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.378015 5031 scope.go:117] "RemoveContainer" containerID="cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.377679 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4pll" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.404247 5031 scope.go:117] "RemoveContainer" containerID="4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.443202 5031 scope.go:117] "RemoveContainer" containerID="01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.458528 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.476241 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4pll"] Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.499417 5031 scope.go:117] "RemoveContainer" containerID="cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d" Jan 29 09:40:31 crc kubenswrapper[5031]: E0129 09:40:31.500071 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d\": container with ID starting with cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d not found: ID does not exist" containerID="cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.500152 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d"} err="failed to get container status \"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d\": rpc error: code = NotFound desc = could not find container \"cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d\": container with ID starting with cead9b996ba987a45bcdde98d12ed639c49b1ec470933f56d6532b8990e09b8d not found: ID does not exist" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.500178 5031 scope.go:117] "RemoveContainer" containerID="4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742" Jan 29 09:40:31 crc kubenswrapper[5031]: E0129 09:40:31.500619 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742\": container with ID starting with 4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742 not found: ID does not exist" containerID="4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.500640 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742"} err="failed to get container status \"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742\": rpc error: code = NotFound desc = could not find container \"4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742\": container with ID starting with 4935bbcbaa1525ee597a4ca4c870c881285dd72c663044e84c2fa47d8c93e742 not found: ID does not exist" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.500657 5031 scope.go:117] "RemoveContainer" containerID="01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0" Jan 29 09:40:31 crc kubenswrapper[5031]: E0129 09:40:31.500889 5031 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0\": container with ID starting with 01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0 not found: ID does not exist" containerID="01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0" Jan 29 09:40:31 crc kubenswrapper[5031]: I0129 09:40:31.500908 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0"} err="failed to get container status \"01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0\": rpc error: code = NotFound desc = could not find container \"01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0\": container with ID starting with 01ccbfa8e59fe7276e750ce6065e80a5c24a17aa938ba413f626f0474c3d04c0 not found: ID does not exist" Jan 29 09:40:32 crc kubenswrapper[5031]: I0129 09:40:32.308466 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" path="/var/lib/kubelet/pods/6e040a4f-99b8-4622-8592-2007d5ede297/volumes" Jan 29 09:40:41 crc kubenswrapper[5031]: I0129 09:40:41.282456 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:40:41 crc kubenswrapper[5031]: E0129 09:40:41.283323 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:40:56 crc kubenswrapper[5031]: I0129 09:40:56.282914 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:40:56 crc kubenswrapper[5031]: E0129 09:40:56.283761 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:41:10 crc kubenswrapper[5031]: I0129 09:41:10.294254 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:41:10 crc kubenswrapper[5031]: E0129 09:41:10.296234 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:41:10 crc kubenswrapper[5031]: I0129 09:41:10.900398 5031 generic.go:334] "Generic (PLEG): container finished" podID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerID="4de47a3471731543ccff8cb637efdf4a3b065581db469ee5d79768652f9c3f3b" exitCode=0 Jan 29 09:41:10 crc kubenswrapper[5031]: I0129 09:41:10.900486 5031 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v86rb/must-gather-mmfsw" event={"ID":"586d3fab-ba5c-42ed-8ff8-4052ec209fa9","Type":"ContainerDied","Data":"4de47a3471731543ccff8cb637efdf4a3b065581db469ee5d79768652f9c3f3b"} Jan 29 09:41:10 crc kubenswrapper[5031]: I0129 09:41:10.901713 5031 scope.go:117] "RemoveContainer" containerID="4de47a3471731543ccff8cb637efdf4a3b065581db469ee5d79768652f9c3f3b" Jan 29 09:41:11 crc kubenswrapper[5031]: I0129 09:41:11.591589 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v86rb_must-gather-mmfsw_586d3fab-ba5c-42ed-8ff8-4052ec209fa9/gather/0.log" Jan 29 09:41:19 crc kubenswrapper[5031]: I0129 09:41:19.616192 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v86rb/must-gather-mmfsw"] Jan 29 09:41:19 crc kubenswrapper[5031]: I0129 09:41:19.616964 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v86rb/must-gather-mmfsw" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="copy" containerID="cri-o://60eaa7b692c9ea49b1a4fb35dc21c0f3de7cb4ace9f7971095a9aa22cddae5af" gracePeriod=2 Jan 29 09:41:19 crc kubenswrapper[5031]: I0129 09:41:19.626498 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v86rb/must-gather-mmfsw"] Jan 29 09:41:19 crc kubenswrapper[5031]: I0129 09:41:19.972904 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v86rb_must-gather-mmfsw_586d3fab-ba5c-42ed-8ff8-4052ec209fa9/copy/0.log" Jan 29 09:41:19 crc kubenswrapper[5031]: I0129 09:41:19.973795 5031 generic.go:334] "Generic (PLEG): container finished" podID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerID="60eaa7b692c9ea49b1a4fb35dc21c0f3de7cb4ace9f7971095a9aa22cddae5af" exitCode=143 Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.104009 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v86rb_must-gather-mmfsw_586d3fab-ba5c-42ed-8ff8-4052ec209fa9/copy/0.log" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.104518 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.154539 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25282\" (UniqueName: \"kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282\") pod \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.154663 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output\") pod \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\" (UID: \"586d3fab-ba5c-42ed-8ff8-4052ec209fa9\") " Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.160154 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282" (OuterVolumeSpecName: "kube-api-access-25282") pod "586d3fab-ba5c-42ed-8ff8-4052ec209fa9" (UID: "586d3fab-ba5c-42ed-8ff8-4052ec209fa9"). InnerVolumeSpecName "kube-api-access-25282". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.256455 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25282\" (UniqueName: \"kubernetes.io/projected/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-kube-api-access-25282\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.322844 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "586d3fab-ba5c-42ed-8ff8-4052ec209fa9" (UID: "586d3fab-ba5c-42ed-8ff8-4052ec209fa9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.358214 5031 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586d3fab-ba5c-42ed-8ff8-4052ec209fa9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.982729 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v86rb_must-gather-mmfsw_586d3fab-ba5c-42ed-8ff8-4052ec209fa9/copy/0.log" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.984713 5031 scope.go:117] "RemoveContainer" containerID="60eaa7b692c9ea49b1a4fb35dc21c0f3de7cb4ace9f7971095a9aa22cddae5af" Jan 29 09:41:20 crc kubenswrapper[5031]: I0129 09:41:20.984881 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v86rb/must-gather-mmfsw" Jan 29 09:41:21 crc kubenswrapper[5031]: I0129 09:41:21.008265 5031 scope.go:117] "RemoveContainer" containerID="4de47a3471731543ccff8cb637efdf4a3b065581db469ee5d79768652f9c3f3b" Jan 29 09:41:22 crc kubenswrapper[5031]: I0129 09:41:22.294654 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" path="/var/lib/kubelet/pods/586d3fab-ba5c-42ed-8ff8-4052ec209fa9/volumes" Jan 29 09:41:25 crc kubenswrapper[5031]: I0129 09:41:25.283305 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:41:25 crc kubenswrapper[5031]: E0129 09:41:25.283953 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:41:39 crc kubenswrapper[5031]: I0129 09:41:39.283002 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:41:39 crc kubenswrapper[5031]: E0129 09:41:39.283989 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:41:54 crc kubenswrapper[5031]: I0129 09:41:54.282748 5031 scope.go:117] "RemoveContainer" 
containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:41:54 crc kubenswrapper[5031]: E0129 09:41:54.283610 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:42:07 crc kubenswrapper[5031]: I0129 09:42:07.282586 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:42:07 crc kubenswrapper[5031]: E0129 09:42:07.283605 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:42:19 crc kubenswrapper[5031]: I0129 09:42:19.283302 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:42:19 crc kubenswrapper[5031]: E0129 09:42:19.284300 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:42:34 crc kubenswrapper[5031]: I0129 09:42:34.282975 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:42:34 crc kubenswrapper[5031]: E0129 09:42:34.283992 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:42:46 crc kubenswrapper[5031]: I0129 09:42:46.283112 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:42:46 crc kubenswrapper[5031]: E0129 09:42:46.283945 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:42:59 crc kubenswrapper[5031]: I0129 09:42:59.284063 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:42:59 crc kubenswrapper[5031]: E0129 09:42:59.284774 5031 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:43:12 crc kubenswrapper[5031]: I0129 09:43:12.283312 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:43:12 crc kubenswrapper[5031]: E0129 09:43:12.284744 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:43:26 crc kubenswrapper[5031]: I0129 09:43:26.283070 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:43:26 crc kubenswrapper[5031]: E0129 09:43:26.284395 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:43:39 crc kubenswrapper[5031]: I0129 09:43:39.285015 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:43:39 crc kubenswrapper[5031]: E0129 09:43:39.286426 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:43:52 crc kubenswrapper[5031]: I0129 09:43:52.282824 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:43:52 crc kubenswrapper[5031]: E0129 09:43:52.283778 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.172029 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:43:54 crc kubenswrapper[5031]: E0129 09:43:54.172960 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="extract-utilities" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.172978 5031 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="extract-utilities" Jan 29 09:43:54 crc kubenswrapper[5031]: E0129 09:43:54.172998 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="extract-content" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173012 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="extract-content" Jan 29 09:43:54 crc kubenswrapper[5031]: E0129 09:43:54.173029 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="copy" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173039 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="copy" Jan 29 09:43:54 crc kubenswrapper[5031]: E0129 09:43:54.173067 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="registry-server" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173076 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="registry-server" Jan 29 09:43:54 crc kubenswrapper[5031]: E0129 09:43:54.173096 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="gather" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173106 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="gather" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173349 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e040a4f-99b8-4622-8592-2007d5ede297" containerName="registry-server" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173478 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="copy" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.173554 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="586d3fab-ba5c-42ed-8ff8-4052ec209fa9" containerName="gather" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.175220 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.198993 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.209297 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kk4s\" (UniqueName: \"kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.209408 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.209579 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.311630 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kk4s\" (UniqueName: \"kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.311721 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.311750 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.312392 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.312410 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.460515 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2kk4s\" (UniqueName: \"kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s\") pod \"redhat-operators-nvfnq\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:54 crc kubenswrapper[5031]: I0129 09:43:54.505202 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:43:55 crc kubenswrapper[5031]: I0129 09:43:55.060092 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:43:55 crc kubenswrapper[5031]: I0129 09:43:55.957289 5031 generic.go:334] "Generic (PLEG): container finished" podID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerID="7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf" exitCode=0 Jan 29 09:43:55 crc kubenswrapper[5031]: I0129 09:43:55.957577 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerDied","Data":"7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf"} Jan 29 09:43:55 crc kubenswrapper[5031]: I0129 09:43:55.957600 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerStarted","Data":"039f19c49ef6fb8cfba068ecfcb7745926d8e4f020e9d1c3f5480676f3fbc2d1"} Jan 29 09:43:55 crc kubenswrapper[5031]: I0129 09:43:55.959331 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:43:57 crc kubenswrapper[5031]: I0129 09:43:57.979921 5031 generic.go:334] "Generic (PLEG): container finished" podID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerID="e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047" exitCode=0 Jan 29 09:43:57 crc kubenswrapper[5031]: I0129 09:43:57.980048 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerDied","Data":"e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047"} Jan 29 09:43:58 crc kubenswrapper[5031]: I0129 09:43:58.992908 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerStarted","Data":"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b"} Jan 29 09:43:59 crc kubenswrapper[5031]: I0129 09:43:59.038594 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvfnq" podStartSLOduration=2.656237121 podStartE2EDuration="5.038517683s" podCreationTimestamp="2026-01-29 09:43:54 +0000 UTC" firstStartedPulling="2026-01-29 09:43:55.959136004 +0000 UTC m=+3916.458723956" lastFinishedPulling="2026-01-29 09:43:58.341416526 +0000 UTC m=+3918.841004518" observedRunningTime="2026-01-29 09:43:59.016867003 +0000 UTC m=+3919.516454965" watchObservedRunningTime="2026-01-29 09:43:59.038517683 +0000 UTC m=+3919.538105675" Jan 29 09:44:03 crc kubenswrapper[5031]: I0129 09:44:03.282636 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:44:03 crc kubenswrapper[5031]: E0129 09:44:03.284550 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:44:04 crc kubenswrapper[5031]: I0129 09:44:04.506149 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:04 crc kubenswrapper[5031]: I0129 09:44:04.506521 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:05 crc kubenswrapper[5031]: I0129 09:44:05.560870 5031 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvfnq" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="registry-server" probeResult="failure" output=< Jan 29 09:44:05 crc kubenswrapper[5031]: timeout: failed to connect service ":50051" within 1s Jan 29 09:44:05 crc kubenswrapper[5031]: > Jan 29 09:44:14 crc kubenswrapper[5031]: I0129 09:44:14.282684 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:44:14 crc kubenswrapper[5031]: I0129 09:44:14.555948 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:14 crc kubenswrapper[5031]: I0129 09:44:14.608099 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:14 crc kubenswrapper[5031]: I0129 09:44:14.793138 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:44:15 crc kubenswrapper[5031]: I0129 09:44:15.129534 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177"} Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.138183 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvfnq" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="registry-server" containerID="cri-o://c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b" gracePeriod=2 Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.638290 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.770904 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities\") pod \"c292fe98-914e-4596-b027-2f3b8d9338b6\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.771043 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kk4s\" (UniqueName: \"kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s\") pod \"c292fe98-914e-4596-b027-2f3b8d9338b6\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.771120 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content\") pod \"c292fe98-914e-4596-b027-2f3b8d9338b6\" (UID: \"c292fe98-914e-4596-b027-2f3b8d9338b6\") " Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.772498 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities" (OuterVolumeSpecName: "utilities") pod "c292fe98-914e-4596-b027-2f3b8d9338b6" (UID: "c292fe98-914e-4596-b027-2f3b8d9338b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.780238 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s" (OuterVolumeSpecName: "kube-api-access-2kk4s") pod "c292fe98-914e-4596-b027-2f3b8d9338b6" (UID: "c292fe98-914e-4596-b027-2f3b8d9338b6"). InnerVolumeSpecName "kube-api-access-2kk4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.873515 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.873555 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kk4s\" (UniqueName: \"kubernetes.io/projected/c292fe98-914e-4596-b027-2f3b8d9338b6-kube-api-access-2kk4s\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.906636 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c292fe98-914e-4596-b027-2f3b8d9338b6" (UID: "c292fe98-914e-4596-b027-2f3b8d9338b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:44:16 crc kubenswrapper[5031]: I0129 09:44:16.975155 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c292fe98-914e-4596-b027-2f3b8d9338b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.149479 5031 generic.go:334] "Generic (PLEG): container finished" podID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerID="c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b" exitCode=0 Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.149576 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvfnq" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.149569 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerDied","Data":"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b"} Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.149966 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvfnq" event={"ID":"c292fe98-914e-4596-b027-2f3b8d9338b6","Type":"ContainerDied","Data":"039f19c49ef6fb8cfba068ecfcb7745926d8e4f020e9d1c3f5480676f3fbc2d1"} Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.149993 5031 scope.go:117] "RemoveContainer" containerID="c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.172353 5031 scope.go:117] "RemoveContainer" containerID="e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.193592 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.203142 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvfnq"] Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.211519 5031 scope.go:117] "RemoveContainer" containerID="7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.244547 5031 scope.go:117] "RemoveContainer" containerID="c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b" Jan 29 09:44:17 crc kubenswrapper[5031]: E0129 09:44:17.245823 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b\": container with ID starting with c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b not found: ID does not exist" containerID="c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b" Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.245868 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b"} err="failed to get container status \"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b\": rpc error: code = NotFound desc = could not find container \"c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b\": container with ID starting with c73062e24082a8ae02b625d5b534fea7a45b55c7b6dd4119b115b3694b12b88b not found: ID does not exist" Jan 29 09:44:17 crc 
kubenswrapper[5031]: I0129 09:44:17.245894 5031 scope.go:117] "RemoveContainer" containerID="e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047"
Jan 29 09:44:17 crc kubenswrapper[5031]: E0129 09:44:17.246411 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047\": container with ID starting with e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047 not found: ID does not exist" containerID="e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047"
Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.246469 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047"} err="failed to get container status \"e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047\": rpc error: code = NotFound desc = could not find container \"e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047\": container with ID starting with e8677a0a4e4a294777b65170ba175282f0e0ec98e360f015b843a723ca315047 not found: ID does not exist"
Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.246503 5031 scope.go:117] "RemoveContainer" containerID="7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf"
Jan 29 09:44:17 crc kubenswrapper[5031]: E0129 09:44:17.247838 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf\": container with ID starting with 7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf not found: ID does not exist" containerID="7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf"
Jan 29 09:44:17 crc kubenswrapper[5031]: I0129 09:44:17.247869 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf"} err="failed to get container status \"7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf\": rpc error: code = NotFound desc = could not find container \"7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf\": container with ID starting with 7118a0019db918d2bcd7feb314ab0a9267d55067d875f22728aa6cb627f746cf not found: ID does not exist"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.292684 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" path="/var/lib/kubelet/pods/c292fe98-914e-4596-b027-2f3b8d9338b6/volumes"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.301121 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76trw/must-gather-v6pbp"]
Jan 29 09:44:18 crc kubenswrapper[5031]: E0129 09:44:18.301639 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="extract-content"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.301659 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="extract-content"
Jan 29 09:44:18 crc kubenswrapper[5031]: E0129 09:44:18.301775 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="registry-server"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.301791 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="registry-server"
Jan 29 09:44:18 crc kubenswrapper[5031]: E0129 09:44:18.301807 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="extract-utilities"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.301816 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="extract-utilities"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.302041 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="c292fe98-914e-4596-b027-2f3b8d9338b6" containerName="registry-server"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.303524 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.306732 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76trw"/"openshift-service-ca.crt"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.307581 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-76trw"/"default-dockercfg-ggdr4"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.323242 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76trw"/"kube-root-ca.crt"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.330483 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76trw/must-gather-v6pbp"]
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.404581 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkpp\" (UniqueName: \"kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.404914 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.506960 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkpp\" (UniqueName: \"kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.507707 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.508185 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.529312 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkpp\" (UniqueName: \"kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp\") pod \"must-gather-v6pbp\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") " pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:18 crc kubenswrapper[5031]: I0129 09:44:18.622830 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:44:19 crc kubenswrapper[5031]: I0129 09:44:19.087659 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76trw/must-gather-v6pbp"]
Jan 29 09:44:19 crc kubenswrapper[5031]: W0129 09:44:19.103267 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5df2e74_662a_4b66_9ccc_93c1eac717b8.slice/crio-4dd5044050a0e7b20e2106de82973cec107f8d31ecfce93e36696c13fec8ab12 WatchSource:0}: Error finding container 4dd5044050a0e7b20e2106de82973cec107f8d31ecfce93e36696c13fec8ab12: Status 404 returned error can't find the container with id 4dd5044050a0e7b20e2106de82973cec107f8d31ecfce93e36696c13fec8ab12
Jan 29 09:44:19 crc kubenswrapper[5031]: I0129 09:44:19.204100 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/must-gather-v6pbp" event={"ID":"a5df2e74-662a-4b66-9ccc-93c1eac717b8","Type":"ContainerStarted","Data":"4dd5044050a0e7b20e2106de82973cec107f8d31ecfce93e36696c13fec8ab12"}
Jan 29 09:44:20 crc kubenswrapper[5031]: I0129 09:44:20.216341 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/must-gather-v6pbp" event={"ID":"a5df2e74-662a-4b66-9ccc-93c1eac717b8","Type":"ContainerStarted","Data":"565b4dff7315d5d2bfafc1898bd02658ee2d8af909f624c1542d988560ca7d8e"}
Jan 29 09:44:20 crc kubenswrapper[5031]: I0129 09:44:20.217511 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/must-gather-v6pbp" event={"ID":"a5df2e74-662a-4b66-9ccc-93c1eac717b8","Type":"ContainerStarted","Data":"bff30e6ab4ffecde26e3426329ae52528becd093b17e1c62a935e2cbb389b346"}
Jan 29 09:44:20 crc kubenswrapper[5031]: I0129 09:44:20.243642 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76trw/must-gather-v6pbp" podStartSLOduration=2.243619934 podStartE2EDuration="2.243619934s" podCreationTimestamp="2026-01-29 09:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:44:20.232494916 +0000 UTC m=+3940.732082888" watchObservedRunningTime="2026-01-29 09:44:20.243619934 +0000 UTC m=+3940.743207886"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.689909 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76trw/crc-debug-b2jpn"]
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.691880 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.749805 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxf5k\" (UniqueName: \"kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.749994 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.852409 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxf5k\" (UniqueName: \"kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.852659 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.852821 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:23 crc kubenswrapper[5031]: I0129 09:44:23.895397 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxf5k\" (UniqueName: \"kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k\") pod \"crc-debug-b2jpn\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") " pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:24 crc kubenswrapper[5031]: I0129 09:44:24.014356 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:24 crc kubenswrapper[5031]: W0129 09:44:24.072502 5031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c2a3154_325e_49f7_95e3_023539d1f38b.slice/crio-3138e51522631328ca0bdb1593c7640aeecaf08b95099712e052bd9aecddeec5 WatchSource:0}: Error finding container 3138e51522631328ca0bdb1593c7640aeecaf08b95099712e052bd9aecddeec5: Status 404 returned error can't find the container with id 3138e51522631328ca0bdb1593c7640aeecaf08b95099712e052bd9aecddeec5
Jan 29 09:44:24 crc kubenswrapper[5031]: I0129 09:44:24.248021 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-b2jpn" event={"ID":"3c2a3154-325e-49f7-95e3-023539d1f38b","Type":"ContainerStarted","Data":"3138e51522631328ca0bdb1593c7640aeecaf08b95099712e052bd9aecddeec5"}
Jan 29 09:44:25 crc kubenswrapper[5031]: I0129 09:44:25.260056 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-b2jpn" event={"ID":"3c2a3154-325e-49f7-95e3-023539d1f38b","Type":"ContainerStarted","Data":"0aeb8a1fed215721ddeb828a809a3d044631d57b6409d0c10d185ba8987b3010"}
Jan 29 09:44:25 crc kubenswrapper[5031]: I0129 09:44:25.281933 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76trw/crc-debug-b2jpn" podStartSLOduration=2.28190876 podStartE2EDuration="2.28190876s" podCreationTimestamp="2026-01-29 09:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:44:25.271842501 +0000 UTC m=+3945.771430463" watchObservedRunningTime="2026-01-29 09:44:25.28190876 +0000 UTC m=+3945.781496712"
Jan 29 09:44:57 crc kubenswrapper[5031]: I0129 09:44:57.582011 5031 generic.go:334] "Generic (PLEG): container finished" podID="3c2a3154-325e-49f7-95e3-023539d1f38b" containerID="0aeb8a1fed215721ddeb828a809a3d044631d57b6409d0c10d185ba8987b3010" exitCode=0
Jan 29 09:44:57 crc kubenswrapper[5031]: I0129 09:44:57.582113 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-b2jpn" event={"ID":"3c2a3154-325e-49f7-95e3-023539d1f38b","Type":"ContainerDied","Data":"0aeb8a1fed215721ddeb828a809a3d044631d57b6409d0c10d185ba8987b3010"}
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.710983 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.747313 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76trw/crc-debug-b2jpn"]
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.757068 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76trw/crc-debug-b2jpn"]
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.904900 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxf5k\" (UniqueName: \"kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k\") pod \"3c2a3154-325e-49f7-95e3-023539d1f38b\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") "
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.905070 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host\") pod \"3c2a3154-325e-49f7-95e3-023539d1f38b\" (UID: \"3c2a3154-325e-49f7-95e3-023539d1f38b\") "
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.905183 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host" (OuterVolumeSpecName: "host") pod "3c2a3154-325e-49f7-95e3-023539d1f38b" (UID: "3c2a3154-325e-49f7-95e3-023539d1f38b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.905662 5031 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c2a3154-325e-49f7-95e3-023539d1f38b-host\") on node \"crc\" DevicePath \"\""
Jan 29 09:44:58 crc kubenswrapper[5031]: I0129 09:44:58.911235 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k" (OuterVolumeSpecName: "kube-api-access-jxf5k") pod "3c2a3154-325e-49f7-95e3-023539d1f38b" (UID: "3c2a3154-325e-49f7-95e3-023539d1f38b"). InnerVolumeSpecName "kube-api-access-jxf5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.007340 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxf5k\" (UniqueName: \"kubernetes.io/projected/3c2a3154-325e-49f7-95e3-023539d1f38b-kube-api-access-jxf5k\") on node \"crc\" DevicePath \"\""
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.617017 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3138e51522631328ca0bdb1593c7640aeecaf08b95099712e052bd9aecddeec5"
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.617068 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-b2jpn"
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.959998 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76trw/crc-debug-xdjcx"]
Jan 29 09:44:59 crc kubenswrapper[5031]: E0129 09:44:59.960405 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2a3154-325e-49f7-95e3-023539d1f38b" containerName="container-00"
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.960416 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2a3154-325e-49f7-95e3-023539d1f38b" containerName="container-00"
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.960608 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2a3154-325e-49f7-95e3-023539d1f38b" containerName="container-00"
Jan 29 09:44:59 crc kubenswrapper[5031]: I0129 09:44:59.961240 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.028129 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.028243 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn9c7\" (UniqueName: \"kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.129607 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.129980 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn9c7\" (UniqueName: \"kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.129687 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.151857 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn9c7\" (UniqueName: \"kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7\") pod \"crc-debug-xdjcx\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") " pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.228777 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"]
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.230180 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.233294 5031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.234988 5031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.265452 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"]
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.276774 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.298973 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2a3154-325e-49f7-95e3-023539d1f38b" path="/var/lib/kubelet/pods/3c2a3154-325e-49f7-95e3-023539d1f38b/volumes"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.335600 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.335723 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wt8\" (UniqueName: \"kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.335771 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.437407 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.437854 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9wt8\" (UniqueName: \"kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.437898 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.438949 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.443140 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.458971 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9wt8\" (UniqueName: \"kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8\") pod \"collect-profiles-29494665-g79hc\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.564147 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.633692 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-xdjcx" event={"ID":"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e","Type":"ContainerStarted","Data":"aedb484bfdd0276047937b71a2765709b9db33f9ed681d0f49773772be660aab"}
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.633769 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-xdjcx" event={"ID":"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e","Type":"ContainerStarted","Data":"a2b6a7cba6c1aa5be65813fc6e1813aa138bf488ae8a28eb42cbafecd4ea00aa"}
Jan 29 09:45:00 crc kubenswrapper[5031]: I0129 09:45:00.690796 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76trw/crc-debug-xdjcx" podStartSLOduration=1.690777478 podStartE2EDuration="1.690777478s" podCreationTimestamp="2026-01-29 09:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:45:00.677944974 +0000 UTC m=+3981.177532926" watchObservedRunningTime="2026-01-29 09:45:00.690777478 +0000 UTC m=+3981.190365430"
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.049444 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"]
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.642610 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc" event={"ID":"3b44b464-6265-4ec8-b930-b22e64bc3bba","Type":"ContainerStarted","Data":"edecbba254bc89be3aafd52c9f468ea4b1a0b18eb615d3582bb75c405fb4eac5"}
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.642989 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc" event={"ID":"3b44b464-6265-4ec8-b930-b22e64bc3bba","Type":"ContainerStarted","Data":"68b522aa8395c9b8b20144240b4ee4f1ec5dabd08d8d68da5e2f71e9511cdc24"}
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.644689 5031 generic.go:334] "Generic (PLEG): container finished" podID="a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" containerID="aedb484bfdd0276047937b71a2765709b9db33f9ed681d0f49773772be660aab" exitCode=0
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.644722 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-xdjcx" event={"ID":"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e","Type":"ContainerDied","Data":"aedb484bfdd0276047937b71a2765709b9db33f9ed681d0f49773772be660aab"}
Jan 29 09:45:01 crc kubenswrapper[5031]: I0129 09:45:01.705235 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc" podStartSLOduration=1.705213844 podStartE2EDuration="1.705213844s" podCreationTimestamp="2026-01-29 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:45:01.702477681 +0000 UTC m=+3982.202065633" watchObservedRunningTime="2026-01-29 09:45:01.705213844 +0000 UTC m=+3982.204801796"
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.656046 5031 generic.go:334] "Generic (PLEG): container finished" podID="3b44b464-6265-4ec8-b930-b22e64bc3bba" containerID="edecbba254bc89be3aafd52c9f468ea4b1a0b18eb615d3582bb75c405fb4eac5" exitCode=0
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.656207 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc" event={"ID":"3b44b464-6265-4ec8-b930-b22e64bc3bba","Type":"ContainerDied","Data":"edecbba254bc89be3aafd52c9f468ea4b1a0b18eb615d3582bb75c405fb4eac5"}
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.753912 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.828447 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76trw/crc-debug-xdjcx"]
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.839279 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76trw/crc-debug-xdjcx"]
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.886994 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host\") pod \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") "
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.887091 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host" (OuterVolumeSpecName: "host") pod "a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" (UID: "a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.887455 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn9c7\" (UniqueName: \"kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7\") pod \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\" (UID: \"a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e\") "
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.889810 5031 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-host\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.893585 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7" (OuterVolumeSpecName: "kube-api-access-hn9c7") pod "a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" (UID: "a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e"). InnerVolumeSpecName "kube-api-access-hn9c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:45:02 crc kubenswrapper[5031]: I0129 09:45:02.993416 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn9c7\" (UniqueName: \"kubernetes.io/projected/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e-kube-api-access-hn9c7\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:03 crc kubenswrapper[5031]: I0129 09:45:03.665827 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-xdjcx"
Jan 29 09:45:03 crc kubenswrapper[5031]: I0129 09:45:03.667492 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b6a7cba6c1aa5be65813fc6e1813aa138bf488ae8a28eb42cbafecd4ea00aa"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.073514 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76trw/crc-debug-4zzkl"]
Jan 29 09:45:04 crc kubenswrapper[5031]: E0129 09:45:04.074204 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" containerName="container-00"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.074220 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" containerName="container-00"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.074420 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" containerName="container-00"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.075163 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.140534 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.219890 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume\") pod \"3b44b464-6265-4ec8-b930-b22e64bc3bba\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") "
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.220581 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9wt8\" (UniqueName: \"kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8\") pod \"3b44b464-6265-4ec8-b930-b22e64bc3bba\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") "
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.220638 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume\") pod \"3b44b464-6265-4ec8-b930-b22e64bc3bba\" (UID: \"3b44b464-6265-4ec8-b930-b22e64bc3bba\") "
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.221167 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.221425 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvhp\" (UniqueName: \"kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.221454 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b44b464-6265-4ec8-b930-b22e64bc3bba" (UID: "3b44b464-6265-4ec8-b930-b22e64bc3bba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.221570 5031 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b44b464-6265-4ec8-b930-b22e64bc3bba-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.230698 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8" (OuterVolumeSpecName: "kube-api-access-h9wt8") pod "3b44b464-6265-4ec8-b930-b22e64bc3bba" (UID: "3b44b464-6265-4ec8-b930-b22e64bc3bba"). InnerVolumeSpecName "kube-api-access-h9wt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.247157 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b44b464-6265-4ec8-b930-b22e64bc3bba" (UID: "3b44b464-6265-4ec8-b930-b22e64bc3bba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.297146 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e" path="/var/lib/kubelet/pods/a5c97f3f-17a4-4ae2-8b50-c64e4dd4127e/volumes"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.323663 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.323771 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvhp\" (UniqueName: \"kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.323821 5031 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b44b464-6265-4ec8-b930-b22e64bc3bba-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.323831 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9wt8\" (UniqueName: \"kubernetes.io/projected/3b44b464-6265-4ec8-b930-b22e64bc3bba-kube-api-access-h9wt8\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.324105 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.340669 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvhp\" (UniqueName: \"kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp\") pod \"crc-debug-4zzkl\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") " pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.454806 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.677096 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-4zzkl" event={"ID":"aa3eb66f-3899-4806-8c91-87f077a677d1","Type":"ContainerStarted","Data":"d4fd99bf99a332dec930fb3970eef14593b4d04a033f9a2195d4768ac0442ab3"}
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.719649 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc" event={"ID":"3b44b464-6265-4ec8-b930-b22e64bc3bba","Type":"ContainerDied","Data":"68b522aa8395c9b8b20144240b4ee4f1ec5dabd08d8d68da5e2f71e9511cdc24"}
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.719886 5031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b522aa8395c9b8b20144240b4ee4f1ec5dabd08d8d68da5e2f71e9511cdc24"
Jan 29 09:45:04 crc kubenswrapper[5031]: I0129 09:45:04.719731 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-g79hc"
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.224642 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg"]
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.238378 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p8mlg"]
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.730148 5031 generic.go:334] "Generic (PLEG): container finished" podID="aa3eb66f-3899-4806-8c91-87f077a677d1" containerID="9455d9949df1e3101549f2fbeac94897351286b47618126c71f66321e9c8947a" exitCode=0
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.730190 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/crc-debug-4zzkl" event={"ID":"aa3eb66f-3899-4806-8c91-87f077a677d1","Type":"ContainerDied","Data":"9455d9949df1e3101549f2fbeac94897351286b47618126c71f66321e9c8947a"}
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.781436 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76trw/crc-debug-4zzkl"]
Jan 29 09:45:05 crc kubenswrapper[5031]: I0129 09:45:05.792726 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76trw/crc-debug-4zzkl"]
Jan 29 09:45:06 crc kubenswrapper[5031]: I0129 09:45:06.295918 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577548b3-0ae4-42be-b7bf-a8a79788186e" path="/var/lib/kubelet/pods/577548b3-0ae4-42be-b7bf-a8a79788186e/volumes"
Jan 29 09:45:06 crc kubenswrapper[5031]: I0129 09:45:06.834748 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:06 crc kubenswrapper[5031]: I0129 09:45:06.983221 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host\") pod \"aa3eb66f-3899-4806-8c91-87f077a677d1\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") "
Jan 29 09:45:06 crc kubenswrapper[5031]: I0129 09:45:06.983398 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npvhp\" (UniqueName: \"kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp\") pod \"aa3eb66f-3899-4806-8c91-87f077a677d1\" (UID: \"aa3eb66f-3899-4806-8c91-87f077a677d1\") "
Jan 29 09:45:06 crc kubenswrapper[5031]: I0129 09:45:06.984645 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host" (OuterVolumeSpecName: "host") pod "aa3eb66f-3899-4806-8c91-87f077a677d1" (UID: "aa3eb66f-3899-4806-8c91-87f077a677d1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:45:07 crc kubenswrapper[5031]: I0129 09:45:07.005140 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp" (OuterVolumeSpecName: "kube-api-access-npvhp") pod "aa3eb66f-3899-4806-8c91-87f077a677d1" (UID: "aa3eb66f-3899-4806-8c91-87f077a677d1"). InnerVolumeSpecName "kube-api-access-npvhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:45:07 crc kubenswrapper[5031]: I0129 09:45:07.085516 5031 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa3eb66f-3899-4806-8c91-87f077a677d1-host\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:07 crc kubenswrapper[5031]: I0129 09:45:07.085561 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npvhp\" (UniqueName: \"kubernetes.io/projected/aa3eb66f-3899-4806-8c91-87f077a677d1-kube-api-access-npvhp\") on node \"crc\" DevicePath \"\""
Jan 29 09:45:07 crc kubenswrapper[5031]: I0129 09:45:07.748921 5031 scope.go:117] "RemoveContainer" containerID="9455d9949df1e3101549f2fbeac94897351286b47618126c71f66321e9c8947a"
Jan 29 09:45:07 crc kubenswrapper[5031]: I0129 09:45:07.749156 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/crc-debug-4zzkl"
Jan 29 09:45:08 crc kubenswrapper[5031]: I0129 09:45:08.292351 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3eb66f-3899-4806-8c91-87f077a677d1" path="/var/lib/kubelet/pods/aa3eb66f-3899-4806-8c91-87f077a677d1/volumes"
Jan 29 09:45:11 crc kubenswrapper[5031]: I0129 09:45:11.108929 5031 scope.go:117] "RemoveContainer" containerID="44ce92c733d26f6b44ed27cbae097d03f6bd51bf66637bd2448bdeaecda730a0"
Jan 29 09:45:54 crc kubenswrapper[5031]: I0129 09:45:54.691052 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f47855b9d-vl7rl_f5d945c8-336c-4683-8e04-2dd0de48b0ee/barbican-api/0.log"
Jan 29 09:45:54 crc kubenswrapper[5031]: I0129 09:45:54.894996 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f47855b9d-vl7rl_f5d945c8-336c-4683-8e04-2dd0de48b0ee/barbican-api-log/0.log"
Jan 29 09:45:54 crc kubenswrapper[5031]: I0129 09:45:54.991822 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-685b68c5cb-gfkqk_74ae4456-e53d-410e-931c-108d9b79177f/barbican-keystone-listener/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.058242 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-685b68c5cb-gfkqk_74ae4456-e53d-410e-931c-108d9b79177f/barbican-keystone-listener-log/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.171714 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86875b9f7-r8mj8_2769fca4-758e-4f92-a514-a70ca7cb0b5a/barbican-worker/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.193285 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86875b9f7-r8mj8_2769fca4-758e-4f92-a514-a70ca7cb0b5a/barbican-worker-log/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.400439 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4fqd_91b928d8-c43f-4fa6-b673-62b42f2c88a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.447933 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/ceilometer-central-agent/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.505134 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/ceilometer-notification-agent/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.597064 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/proxy-httpd/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.628040 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8949618-20d4-4cd9-8b4b-6abcf3684676/sg-core/0.log"
Jan 29 09:45:55 crc kubenswrapper[5031]: I0129 09:45:55.660085 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-pw5ld_95c8c7b7-5003-4dae-b405-74dc2263762c/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.002689 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-s7q6v_fc3178c8-27cc-4f8e-a913-6eae9c84da49/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.134199 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c053401-8bfa-4629-926e-e97653fbb397/cinder-api/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.221649 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c053401-8bfa-4629-926e-e97653fbb397/cinder-api-log/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.349774 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ec4354fa-4aef-4401-befd-f3a59619869e/probe/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.581563 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ce55669-5a60-4cbb-8994-441b7c5d0c75/cinder-scheduler/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.602768 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ce55669-5a60-4cbb-8994-441b7c5d0c75/probe/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.631711 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ec4354fa-4aef-4401-befd-f3a59619869e/cinder-backup/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.798527 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1fae57c0-f6a0-4239-b513-e37aec4f4065/probe/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.917873 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1fae57c0-f6a0-4239-b513-e37aec4f4065/cinder-volume/0.log"
Jan 29 09:45:56 crc kubenswrapper[5031]: I0129 09:45:56.942150 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-h9b65_c9397ed4-a4ea-45be-9115-657795050184/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.095245 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kmhj7_1c21c7ac-919e-43f0-92b2-0cf64df94743/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.197566 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/init/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.362658 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/init/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.449068 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e631cdf5-7a95-457f-95ac-8632231e0cd7/glance-httpd/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.455908 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ptpjh_3b0d7949-564d-4b3d-84f8-038fc952a24f/dnsmasq-dns/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.629075 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e631cdf5-7a95-457f-95ac-8632231e0cd7/glance-log/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.652462 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4e136d48-7be7-4b0f-a45c-da6b3d218b8d/glance-httpd/0.log"
Jan 29 09:45:57 crc kubenswrapper[5031]: I0129 09:45:57.767264 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4e136d48-7be7-4b0f-a45c-da6b3d218b8d/glance-log/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.012502 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b47759886-4vh7j_7cfc507f-5595-4ff5-9f5f-8942dc5468dc/horizon/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.090132 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b47759886-4vh7j_7cfc507f-5595-4ff5-9f5f-8942dc5468dc/horizon-log/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.126611 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bm2z4_49194734-e76b-4b96-bf9c-a4a73782e04b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.224620 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tc282_83ca1366-5060-4771-ae03-b06595c0d5fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.443615 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6b6fcb467b-dc5s8_11cb22e9-f3f2-4a42-804c-aaa47ca31a16/keystone-api/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.500854 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494621-vw7kq_d3ee4f52-58c1-4e47-b074-1f2a379b5eb2/keystone-cron/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.700198 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c58acf2e-dae4-45a7-b98e-ef0d3fe1a59d/kube-state-metrics/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.771585 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7z526_4ae4a3e5-86e4-4702-a0da-9ee29ee6e8cc/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:58 crc kubenswrapper[5031]: I0129 09:45:58.929699 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2ce35ae9-25db-409d-af6b-0f5d94e61ea7/manila-api-log/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.000109 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d4320ea6-3657-454b-b535-3776f405d823/probe/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.029005 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2ce35ae9-25db-409d-af6b-0f5d94e61ea7/manila-api/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.147721 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d4320ea6-3657-454b-b535-3776f405d823/manila-scheduler/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.223962 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1ae94363-9689-48ed-8c8d-c1668fb5955a/probe/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.255418 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1ae94363-9689-48ed-8c8d-c1668fb5955a/manila-share/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.516775 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558dccb5cc-bkkrn_8b30d63e-6219-4832-868b-9a115b30f433/neutron-httpd/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.535464 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558dccb5cc-bkkrn_8b30d63e-6219-4832-868b-9a115b30f433/neutron-api/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.687149 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r49p2_5e820097-42d1-47ac-84d1-824842f92b8b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:45:59 crc kubenswrapper[5031]: I0129 09:45:59.992767 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_92d916f0-bb3a-45de-b176-616bd8a170e4/nova-api-log/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.236008 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_49fa8048-1d04-42bc-8e37-b6b40e7e5ece/nova-cell0-conductor-conductor/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.337142 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_92d916f0-bb3a-45de-b176-616bd8a170e4/nova-api-api/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.475853 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7411c3e7-5370-4bc2-85b8-aa1a137d948b/memcached/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.576589 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_fd945c64-5938-4cc6-9eb5-17e013e36aba/nova-cell1-conductor-conductor/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.694796 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_15c7d35a-0f80-4823-8d8d-371e1f76f869/nova-cell1-novncproxy-novncproxy/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.745224 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fm4ts_05fc07ec-828a-468d-be87-1fe3925dfb0c/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:00 crc kubenswrapper[5031]: I0129 09:46:00.863809 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_97911fdf-2136-4700-8474-d165d6de4c33/nova-metadata-log/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.189668 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/mysql-bootstrap/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.228818 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_57931a94-e323-4a04-915d-735dc7a09030/nova-scheduler-scheduler/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.342443 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/mysql-bootstrap/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.360630 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7149ef7-171a-48eb-a13a-af1982b4fbb1/galera/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.508066 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/mysql-bootstrap/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.686120 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/mysql-bootstrap/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.761915 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_33700928-aca8-42c5-83f7-a57572d399aa/galera/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.763205 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7cd1d91b-5c5a-425c-bb48-ed97702719d6/openstackclient/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.912270 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_97911fdf-2136-4700-8474-d165d6de4c33/nova-metadata-metadata/0.log"
Jan 29 09:46:01 crc kubenswrapper[5031]: I0129 09:46:01.999587 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-khdxz_8e57b4c5-5c87-4720-9586-c4e7a8cf763f/openstack-network-exporter/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.002591 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server-init/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.209393 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.217775 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovsdb-server-init/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.234311 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-z6mp7_b34fd049-3d7e-4d5d-acfc-8e4c450bf857/ovn-controller/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.271407 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lmq4s_d10ff314-d9a8-43bc-a0ad-c821e181b328/ovs-vswitchd/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.408916 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kdq49_764d97ce-43f8-4cce-9b06-61f1a548199f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.454322 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2f3941fd-64d1-4652-83b1-e89d547e4df5/openstack-network-exporter/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.489546 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2f3941fd-64d1-4652-83b1-e89d547e4df5/ovn-northd/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.622408 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_11c52100-0b09-4377-b50e-84c78d3ddf74/openstack-network-exporter/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.675509 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_11c52100-0b09-4377-b50e-84c78d3ddf74/ovsdbserver-nb/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.740959 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0ad1ce96-1373-407b-b4ec-700934ef6ac4/openstack-network-exporter/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.808016 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0ad1ce96-1373-407b-b4ec-700934ef6ac4/ovsdbserver-sb/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.966888 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c4fdc6744-xx4wj_e009c8bd-2d71-405b-a166-53cf1451c8f0/placement-api/0.log"
Jan 29 09:46:02 crc kubenswrapper[5031]: I0129 09:46:02.978042 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c4fdc6744-xx4wj_e009c8bd-2d71-405b-a166-53cf1451c8f0/placement-log/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.020998 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/setup-container/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.260218 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/rabbitmq/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.266042 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/setup-container/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.303210 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3af83c61-d4e1-4694-a820-1bb5529a2bce/setup-container/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.478557 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/setup-container/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.526873 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-8lg77_5a33f933-f687-47f9-868b-02c0a633ab0f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.534799 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6f8f5da2-26c3-4ff4-a3fa-03cffb1bcd73/rabbitmq/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.652580 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mcmhb_b62042d2-d6ae-42b6-abaa-b08bdb66257d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.730292 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pppc_7a27e64c-0c6a-497f-bdae-50302a72b898/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.822605 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6t4cp_8c91cd46-761e-4015-a2ea-90647c5a7be5/ssh-known-hosts-edpm-deployment/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.950609 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9aaab885-ceb7-4fa0-bfe5-87da9d8bb76e/tempest-tests-tempest-tests-runner/0.log"
Jan 29 09:46:03 crc kubenswrapper[5031]: I0129 09:46:03.998932 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a5239287-c272-4c5b-b72b-c6fd55567ae8/test-operator-logs-container/0.log"
Jan 29 09:46:04 crc kubenswrapper[5031]: I0129 09:46:04.135545 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-f9xd8_71b0cf3a-c1a8-48f9-bfc1-709c8ff03f41/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 29 09:46:29 crc kubenswrapper[5031]: I0129 09:46:29.691223 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log"
Jan 29 09:46:29 crc kubenswrapper[5031]: I0129 09:46:29.902468 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log"
Jan 29 09:46:29 crc kubenswrapper[5031]: I0129 09:46:29.930818 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log"
Jan 29 09:46:29 crc kubenswrapper[5031]: I0129 09:46:29.938315 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.125661 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/util/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.129249 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/pull/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.153458 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e37848cef62f02355e9b0b92c2b1877ee1abcd9ccc98aab6880e43e7ftf959_fa518afd-4138-4e05-9b66-939dc9fea8d1/extract/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.441512 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-6pqwq_9d7a2eca-248d-464e-b698-5f4daee374d3/manager/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.490586 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-mppwm_a1850026-d710-4da7-883b-1b7149900523/manager/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.631982 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-f5hc7_59d726a8-dfae-47c6-a479-682b32601f3b/manager/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.843738 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7857f788f-x5hq5_6b581b93-53b8-4bda-a3bc-7ab837f7aec3/manager/0.log"
Jan 29 09:46:30 crc kubenswrapper[5031]: I0129 09:46:30.892495 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-tt4jw_fef04ed6-9416-4599-a960-cde56635da29/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.115905 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-ftmh8_911c19b6-72d1-4363-bae0-02bb5290a0c3/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.346122 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-tpj2j_7771acfe-a081-49f6-afa7-79c7436486b4/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.354834 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-8dpt8_5b5b3ff2-7c9d-412e-8eef-a203c3096694/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.506968 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6978b79747-zhkh2_8a42f832-5088-4110-a8a9-cc3203ea4677/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.639749 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-9nxrk_3828c08a-7f8d-4d56-8aad-9fb6a7ce294a/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.764584 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-r6hlv_b0b4b733-caa0-46a2-854a-0a96d676fe86/manager/0.log"
Jan 29 09:46:31 crc kubenswrapper[5031]: I0129 09:46:31.907673 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-ltbs2_4f4ae2ca-84cd-4445-a5c6-b1ee75dc81b6/manager/0.log"
Jan 29 09:46:32 crc kubenswrapper[5031]: I0129 09:46:32.055615 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-hhbpv_b7af41a8-c82f-4e03-b775-ad36d931b8c5/manager/0.log"
Jan 29 09:46:32 crc kubenswrapper[5031]: I0129 09:46:32.149587 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-b6c99d9c5-pppjk_652f139c-6f12-42e1-88e8-fef00b383015/manager/0.log"
Jan 29 09:46:32 crc kubenswrapper[5031]: I0129 09:46:32.348544 5031 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtgcsp_5925efab-b140-47f9-9b05-309973965161/manager/0.log" Jan 29 09:46:32 crc kubenswrapper[5031]: I0129 09:46:32.553205 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-694c86d6f5-8tvx7_9d3b6973-ffdd-445f-b03f-3783ff2c3159/operator/0.log" Jan 29 09:46:32 crc kubenswrapper[5031]: I0129 09:46:32.810612 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-znw6z_d18ce80b-f96c-41a4-80b5-fe959665c78a/registry-server/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.075657 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-fn2tc_6046088f-7960-4675-a8a6-06eb441cea9f/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.078564 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6hd46_b8416e4f-a2ee-46c8-90ff-2ed68301825e/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.294871 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rwmm7_c3b8b573-36e5-48c9-bfb5-adff7608c393/operator/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.310245 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-46js4_3fb6584b-e21d-4c41-af40-6099ceda26fe/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.747465 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fd9db8655-wjbcx_bacd8bd3-412c-435e-b71d-e43f39daba5d/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.798572 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-684f4d697d-h5vhw_f2eaf23b-b589-4c35-bb14-28a1aa1d9099/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.838852 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-tgkd9_418034d3-f759-4efa-930f-c66f10db0fe2/manager/0.log" Jan 29 09:46:33 crc kubenswrapper[5031]: I0129 09:46:33.992766 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-vt2wm_4e1db845-0d5b-489a-b3bf-a2921dc81cdb/manager/0.log" Jan 29 09:46:38 crc kubenswrapper[5031]: I0129 09:46:38.493743 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:46:38 crc kubenswrapper[5031]: I0129 09:46:38.494275 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:46:54 crc kubenswrapper[5031]: I0129 09:46:54.788192 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kn9ds_66c6d48a-bdee-4f5b-b0ca-da05372e1ba2/control-plane-machine-set-operator/0.log" Jan 29 09:46:54 crc kubenswrapper[5031]: I0129 09:46:54.973013 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-w2sql_8a3bbd5e-4071-4761-b455-e830e12dfa81/kube-rbac-proxy/0.log" Jan 29 09:46:55 crc kubenswrapper[5031]: I0129 09:46:55.013616 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-w2sql_8a3bbd5e-4071-4761-b455-e830e12dfa81/machine-api-operator/0.log" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.184046 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:04 crc kubenswrapper[5031]: E0129 09:47:04.185566 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3eb66f-3899-4806-8c91-87f077a677d1" containerName="container-00" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.185581 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3eb66f-3899-4806-8c91-87f077a677d1" containerName="container-00" Jan 29 09:47:04 crc kubenswrapper[5031]: E0129 09:47:04.185592 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b44b464-6265-4ec8-b930-b22e64bc3bba" containerName="collect-profiles" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.185598 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b44b464-6265-4ec8-b930-b22e64bc3bba" containerName="collect-profiles" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.185776 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3eb66f-3899-4806-8c91-87f077a677d1" containerName="container-00" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.185787 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b44b464-6265-4ec8-b930-b22e64bc3bba" containerName="collect-profiles" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.187122 5031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.204342 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.227763 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.228186 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttx7\" (UniqueName: \"kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.228243 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.329862 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.329976 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttx7\" (UniqueName: \"kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.330038 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.331233 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.331561 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.351833 5031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cttx7\" (UniqueName: \"kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7\") pod \"certified-operators-psh4z\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:04 crc kubenswrapper[5031]: I0129 09:47:04.526796 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:05 crc kubenswrapper[5031]: I0129 09:47:05.063022 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:05 crc kubenswrapper[5031]: I0129 09:47:05.757439 5031 generic.go:334] "Generic (PLEG): container finished" podID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerID="37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7" exitCode=0 Jan 29 09:47:05 crc kubenswrapper[5031]: I0129 09:47:05.757536 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerDied","Data":"37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7"} Jan 29 09:47:05 crc kubenswrapper[5031]: I0129 09:47:05.759130 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerStarted","Data":"485e0767d93abd50a0eb730814a735ed1d46658571144605ad0c12d5794ec984"} Jan 29 09:47:07 crc kubenswrapper[5031]: I0129 09:47:07.779430 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerStarted","Data":"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b"} Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.493598 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.493663 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.633693 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hfrt9_18d66dd7-f94a-41fd-9d04-f09c1cea0e58/cert-manager-controller/0.log" Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.789539 5031 generic.go:334] "Generic (PLEG): container finished" podID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerID="7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b" exitCode=0 Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.789590 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerDied","Data":"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b"} Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.827993 5031 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l47tb_f62b13b3-ff83-4f97-a291-8067c9f5cdc9/cert-manager-cainjector/0.log" Jan 29 09:47:08 crc kubenswrapper[5031]: I0129 09:47:08.899756 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ff66k_8983adca-9e9f-4d65-9ae5-091fa81877a0/cert-manager-webhook/0.log" Jan 29 09:47:09 crc kubenswrapper[5031]: I0129 09:47:09.799786 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerStarted","Data":"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03"} Jan 29 09:47:09 crc kubenswrapper[5031]: I0129 09:47:09.825705 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psh4z" podStartSLOduration=2.339448373 podStartE2EDuration="5.825688474s" podCreationTimestamp="2026-01-29 09:47:04 +0000 UTC" firstStartedPulling="2026-01-29 09:47:05.759760311 +0000 UTC m=+4106.259348263" lastFinishedPulling="2026-01-29 09:47:09.246000392 +0000 UTC m=+4109.745588364" observedRunningTime="2026-01-29 09:47:09.82294373 +0000 UTC m=+4110.322531702" watchObservedRunningTime="2026-01-29 09:47:09.825688474 +0000 UTC m=+4110.325276426" Jan 29 09:47:14 crc kubenswrapper[5031]: I0129 09:47:14.527448 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:14 crc kubenswrapper[5031]: I0129 09:47:14.527891 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:14 crc kubenswrapper[5031]: I0129 09:47:14.592425 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:14 crc kubenswrapper[5031]: I0129 09:47:14.883017 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:14 crc kubenswrapper[5031]: I0129 09:47:14.937474 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:16 crc kubenswrapper[5031]: I0129 09:47:16.859343 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-psh4z" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="registry-server" containerID="cri-o://4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03" gracePeriod=2 Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.273228 5031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.388971 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content\") pod \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.389088 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cttx7\" (UniqueName: \"kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7\") pod \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.389217 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities\") pod \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\" (UID: \"4cf93a46-21ba-4fc3-9154-44ddb6f8f864\") " Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.390245 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities" (OuterVolumeSpecName: "utilities") pod "4cf93a46-21ba-4fc3-9154-44ddb6f8f864" (UID: "4cf93a46-21ba-4fc3-9154-44ddb6f8f864"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.395712 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7" (OuterVolumeSpecName: "kube-api-access-cttx7") pod "4cf93a46-21ba-4fc3-9154-44ddb6f8f864" (UID: "4cf93a46-21ba-4fc3-9154-44ddb6f8f864"). InnerVolumeSpecName "kube-api-access-cttx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.445302 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cf93a46-21ba-4fc3-9154-44ddb6f8f864" (UID: "4cf93a46-21ba-4fc3-9154-44ddb6f8f864"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.494426 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.494458 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.494470 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cttx7\" (UniqueName: \"kubernetes.io/projected/4cf93a46-21ba-4fc3-9154-44ddb6f8f864-kube-api-access-cttx7\") on node \"crc\" DevicePath \"\"" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.869678 5031 generic.go:334] "Generic (PLEG): container finished" podID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerID="4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03" exitCode=0 Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.869777 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerDied","Data":"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03"} Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.869878 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psh4z" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.870080 5031 scope.go:117] "RemoveContainer" containerID="4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.870050 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psh4z" event={"ID":"4cf93a46-21ba-4fc3-9154-44ddb6f8f864","Type":"ContainerDied","Data":"485e0767d93abd50a0eb730814a735ed1d46658571144605ad0c12d5794ec984"} Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.915575 5031 scope.go:117] "RemoveContainer" containerID="7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b" Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.938128 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:17 crc kubenswrapper[5031]: I0129 09:47:17.952467 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-psh4z"] Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.073054 5031 scope.go:117] "RemoveContainer" containerID="37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.210157 5031 scope.go:117] "RemoveContainer" containerID="4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03" Jan 29 09:47:18 crc kubenswrapper[5031]: E0129 09:47:18.210661 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03\": container with ID starting with 4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03 not found: ID does not exist" containerID="4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.210710 
5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03"} err="failed to get container status \"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03\": rpc error: code = NotFound desc = could not find container \"4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03\": container with ID starting with 4ceada32b203c1aa354d4d9deea6d9de09e47e78e16537bebe20cbba9da76d03 not found: ID does not exist" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.210735 5031 scope.go:117] "RemoveContainer" containerID="7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b" Jan 29 09:47:18 crc kubenswrapper[5031]: E0129 09:47:18.211212 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b\": container with ID starting with 7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b not found: ID does not exist" containerID="7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.211260 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b"} err="failed to get container status \"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b\": rpc error: code = NotFound desc = could not find container \"7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b\": container with ID starting with 7062a23bdac15ca55d88ed8c8b3fce0b130cea76f6c3824890c1061aa1292d8b not found: ID does not exist" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.211299 5031 scope.go:117] "RemoveContainer" containerID="37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7" Jan 29 09:47:18 crc kubenswrapper[5031]: E0129 09:47:18.211651 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7\": container with ID starting with 37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7 not found: ID does not exist" containerID="37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.211678 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7"} err="failed to get container status \"37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7\": rpc error: code = NotFound desc = could not find container \"37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7\": container with ID starting with 37cc7ebb845709f94baedb0f2ea60763a5f511dc71da4a0ec0f6bd8f123eebb7 not found: ID does not exist" Jan 29 09:47:18 crc kubenswrapper[5031]: I0129 09:47:18.293039 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" path="/var/lib/kubelet/pods/4cf93a46-21ba-4fc3-9154-44ddb6f8f864/volumes" Jan 29 09:47:21 crc kubenswrapper[5031]: I0129 09:47:21.783230 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gcrhb_5c55f203-c18f-402b-a766-a1f291a5b3dc/nmstate-console-plugin/0.log" Jan 29 09:47:21 crc kubenswrapper[5031]: I0129 
09:47:21.936485 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wzjdc_21eadbd2-15f3-47aa-8428-fb22325e29a6/nmstate-handler/0.log" Jan 29 09:47:21 crc kubenswrapper[5031]: I0129 09:47:21.949591 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w2269_27616237-18b5-463e-be46-59392bbff884/kube-rbac-proxy/0.log" Jan 29 09:47:22 crc kubenswrapper[5031]: I0129 09:47:22.005141 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w2269_27616237-18b5-463e-be46-59392bbff884/nmstate-metrics/0.log" Jan 29 09:47:22 crc kubenswrapper[5031]: I0129 09:47:22.139913 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-vdbl7_1e390d20-964f-4337-a396-d56cf85b5a4d/nmstate-operator/0.log" Jan 29 09:47:22 crc kubenswrapper[5031]: I0129 09:47:22.196106 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-scf9x_2a6126a5-5e52-418a-ba32-ce426e8ce58c/nmstate-webhook/0.log" Jan 29 09:47:38 crc kubenswrapper[5031]: I0129 09:47:38.494094 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:47:38 crc kubenswrapper[5031]: I0129 09:47:38.494651 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:47:38 crc kubenswrapper[5031]: I0129 09:47:38.494690 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" Jan 29 09:47:38 crc kubenswrapper[5031]: I0129 09:47:38.495406 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:47:38 crc kubenswrapper[5031]: I0129 09:47:38.495460 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177" gracePeriod=600 Jan 29 09:47:39 crc kubenswrapper[5031]: I0129 09:47:39.050300 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177" exitCode=0 Jan 29 09:47:39 crc kubenswrapper[5031]: I0129 09:47:39.050338 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177"} Jan 29 09:47:39 crc kubenswrapper[5031]: I0129 
09:47:39.050683 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerStarted","Data":"fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"} Jan 29 09:47:39 crc kubenswrapper[5031]: I0129 09:47:39.050706 5031 scope.go:117] "RemoveContainer" containerID="1bfcbd5cee0afc8f78c66baaf15e9037a309c36e0a69b7af4ef44e06904db5d3" Jan 29 09:47:49 crc kubenswrapper[5031]: I0129 09:47:49.775496 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2ls2g_d0fae1e4-5509-482f-9430-17a7148dc235/kube-rbac-proxy/0.log" Jan 29 09:47:49 crc kubenswrapper[5031]: I0129 09:47:49.904377 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:47:49 crc kubenswrapper[5031]: I0129 09:47:49.906627 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2ls2g_d0fae1e4-5509-482f-9430-17a7148dc235/controller/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.146557 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.182381 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.189288 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.201323 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.399961 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.402240 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.440286 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.444825 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.602978 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-reloader/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.633670 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-metrics/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.661884 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/cp-frr-files/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.685302 5031 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/controller/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.823745 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/frr-metrics/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.850391 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/kube-rbac-proxy/0.log" Jan 29 09:47:50 crc kubenswrapper[5031]: I0129 09:47:50.882554 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/kube-rbac-proxy-frr/0.log" Jan 29 09:47:51 crc kubenswrapper[5031]: I0129 09:47:51.022656 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/reloader/0.log" Jan 29 09:47:51 crc kubenswrapper[5031]: I0129 09:47:51.134131 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7pdgn_4fef4c25-5a46-45ba-bc17-fe5696028ac9/frr-k8s-webhook-server/0.log" Jan 29 09:47:51 crc kubenswrapper[5031]: I0129 09:47:51.304995 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7777f7948d-dxh4l_417f7fc8-934e-415e-89cc-fb09ba21917e/manager/0.log" Jan 29 09:47:51 crc kubenswrapper[5031]: I0129 09:47:51.438898 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d7d76dfc-zj8mx_729c722e-e67a-4ff6-a4cf-0f6a68fd2c66/webhook-server/0.log" Jan 29 09:47:51 crc kubenswrapper[5031]: I0129 09:47:51.607158 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dsws8_28efe09e-8a3b-4a66-8818-18a1bc11b34d/kube-rbac-proxy/0.log" Jan 29 09:47:52 crc kubenswrapper[5031]: I0129 09:47:52.112066 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dsws8_28efe09e-8a3b-4a66-8818-18a1bc11b34d/speaker/0.log" Jan 29 09:47:52 crc kubenswrapper[5031]: I0129 09:47:52.320177 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-99ftr_93ad0b89-0d88-4e18-9f8d-4071a5847f1a/frr/0.log" Jan 29 09:48:04 crc kubenswrapper[5031]: I0129 09:48:04.796549 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.022739 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.038171 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.068001 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.191475 5031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.215276 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/extract/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.378834 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.524117 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.570645 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.570935 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.758681 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/extract/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.821721 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/util/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.853584 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138p665_d15df353-3a05-45aa-8c9f-ba06ba2595d5/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.927538 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxvjs4_1f48659c-8c60-4f11-b68f-596ddf2d1b73/pull/0.log" Jan 29 09:48:05 crc kubenswrapper[5031]: I0129 09:48:05.990997 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.205353 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.231558 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.258496 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.405598 5031 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-utilities/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.439091 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/extract-content/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.679727 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjfxm_d80684b2-6d0e-4e75-a152-8b727d137289/registry-server/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.719868 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.850935 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.866316 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:48:06 crc kubenswrapper[5031]: I0129 09:48:06.918149 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.108897 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-content/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.119153 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/extract-utilities/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.392981 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qjfs_75a63559-30d6-47bc-9f30-5385de9826f0/marketplace-operator/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.518048 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.747956 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.749177 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cr7rh_cb02be63-04db-40b0-9f74-892cec88b048/registry-server/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.766550 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.775733 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:48:07 crc kubenswrapper[5031]: I0129 09:48:07.989349 5031 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-content/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.050900 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/extract-utilities/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.152731 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4vlv_2928c877-fb1d-41fa-9324-13efccbca747/registry-server/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.213884 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.379871 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.381318 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.392846 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.577234 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-utilities/0.log" Jan 29 09:48:08 crc kubenswrapper[5031]: I0129 09:48:08.583793 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/extract-content/0.log" Jan 29 09:48:09 crc kubenswrapper[5031]: I0129 09:48:09.542418 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kr6tb_73a47626-7d91-4369-a5f0-75aba46b4f34/registry-server/0.log" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.311984 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4bcrl"] Jan 29 09:48:24 crc kubenswrapper[5031]: E0129 09:48:24.312911 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="registry-server" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.312924 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="registry-server" Jan 29 09:48:24 crc kubenswrapper[5031]: E0129 09:48:24.312955 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="extract-content" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.312963 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="extract-content" Jan 29 09:48:24 crc kubenswrapper[5031]: E0129 09:48:24.312984 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="extract-utilities" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.312990 5031 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="extract-utilities" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.313163 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf93a46-21ba-4fc3-9154-44ddb6f8f864" containerName="registry-server" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.314570 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.319756 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bcrl"] Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.336118 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bcgh\" (UniqueName: \"kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.336302 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.336395 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.438295 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.438670 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bcgh\" (UniqueName: \"kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.438839 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.439398 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.439764 5031 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.459747 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bcgh\" (UniqueName: \"kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh\") pod \"community-operators-4bcrl\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") " pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:24 crc kubenswrapper[5031]: I0129 09:48:24.655417 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bcrl" Jan 29 09:48:25 crc kubenswrapper[5031]: I0129 09:48:25.200936 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bcrl"] Jan 29 09:48:25 crc kubenswrapper[5031]: I0129 09:48:25.481410 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerStarted","Data":"bacde70a592c99ac96f92149b3b4a8cd6a10b7575ca54dbfe6ab7a7667c59bc5"} Jan 29 09:48:26 crc kubenswrapper[5031]: I0129 09:48:26.493851 5031 generic.go:334] "Generic (PLEG): container finished" podID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerID="485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243" exitCode=0 Jan 29 09:48:26 crc kubenswrapper[5031]: I0129 09:48:26.494125 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerDied","Data":"485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243"} Jan 29 09:48:28 crc kubenswrapper[5031]: I0129 09:48:28.520064 5031 generic.go:334] "Generic (PLEG): container finished" podID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerID="50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749" exitCode=0 Jan 29 09:48:28 crc kubenswrapper[5031]: I0129 09:48:28.520276 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerDied","Data":"50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749"} Jan 29 09:48:31 crc kubenswrapper[5031]: I0129 09:48:31.548637 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerStarted","Data":"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"} Jan 29 09:48:31 crc kubenswrapper[5031]: I0129 09:48:31.571119 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4bcrl" podStartSLOduration=3.673710523 podStartE2EDuration="7.571104464s" podCreationTimestamp="2026-01-29 09:48:24 +0000 UTC" firstStartedPulling="2026-01-29 09:48:26.496072533 +0000 UTC m=+4186.995660485" lastFinishedPulling="2026-01-29 09:48:30.393466474 +0000 UTC m=+4190.893054426" observedRunningTime="2026-01-29 09:48:31.567397785 +0000 UTC m=+4192.066985737" watchObservedRunningTime="2026-01-29 09:48:31.571104464 +0000 UTC m=+4192.070692416" Jan 29 09:48:34 crc 
Jan 29 09:48:34 crc kubenswrapper[5031]: I0129 09:48:34.656682 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:34 crc kubenswrapper[5031]: I0129 09:48:34.657334 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:34 crc kubenswrapper[5031]: I0129 09:48:34.737949 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:36 crc kubenswrapper[5031]: I0129 09:48:36.219335 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:36 crc kubenswrapper[5031]: I0129 09:48:36.271358 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bcrl"]
Jan 29 09:48:37 crc kubenswrapper[5031]: I0129 09:48:37.593568 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4bcrl" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="registry-server" containerID="cri-o://68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7" gracePeriod=2
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.115593 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.219091 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities\") pod \"6843c820-3ad2-4586-b39e-6ed9f63c7079\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") "
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.219165 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content\") pod \"6843c820-3ad2-4586-b39e-6ed9f63c7079\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") "
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.219266 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bcgh\" (UniqueName: \"kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh\") pod \"6843c820-3ad2-4586-b39e-6ed9f63c7079\" (UID: \"6843c820-3ad2-4586-b39e-6ed9f63c7079\") "
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.221803 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities" (OuterVolumeSpecName: "utilities") pod "6843c820-3ad2-4586-b39e-6ed9f63c7079" (UID: "6843c820-3ad2-4586-b39e-6ed9f63c7079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.247599 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh" (OuterVolumeSpecName: "kube-api-access-2bcgh") pod "6843c820-3ad2-4586-b39e-6ed9f63c7079" (UID: "6843c820-3ad2-4586-b39e-6ed9f63c7079"). InnerVolumeSpecName "kube-api-access-2bcgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.298862 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6843c820-3ad2-4586-b39e-6ed9f63c7079" (UID: "6843c820-3ad2-4586-b39e-6ed9f63c7079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.321612 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.321642 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6843c820-3ad2-4586-b39e-6ed9f63c7079-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.321654 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bcgh\" (UniqueName: \"kubernetes.io/projected/6843c820-3ad2-4586-b39e-6ed9f63c7079-kube-api-access-2bcgh\") on node \"crc\" DevicePath \"\""
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.605328 5031 generic.go:334] "Generic (PLEG): container finished" podID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerID="68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7" exitCode=0
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.605391 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerDied","Data":"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"}
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.605427 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bcrl" event={"ID":"6843c820-3ad2-4586-b39e-6ed9f63c7079","Type":"ContainerDied","Data":"bacde70a592c99ac96f92149b3b4a8cd6a10b7575ca54dbfe6ab7a7667c59bc5"}
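The "Killing container with a grace period" entry (gracePeriod=2) is the standard termination contract: deliver SIGTERM, wait out the grace period, and only then force-kill. A simplified process-level sketch of that contract (the real kubelet drives it through the CRI against cri-o, not raw signals; stopWithGrace is an illustrative name):

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sketches the "Killing container with a grace period" contract:
// send SIGTERM, wait up to gracePeriod for exit, then escalate to SIGKILL.
func stopWithGrace(proc *os.Process, gracePeriod time.Duration) {
	proc.Signal(syscall.SIGTERM)
	done := make(chan struct{})
	go func() { proc.Wait(); close(done) }()
	select {
	case <-done:
		// Exited within the grace period; registry-server above did (exitCode=0).
	case <-time.After(gracePeriod):
		proc.Kill() // grace period elapsed; force-kill
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		return
	}
	stopWithGrace(cmd.Process, 2*time.Second) // gracePeriod=2, as in the log
}
```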
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.605428 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bcrl"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.605451 5031 scope.go:117] "RemoveContainer" containerID="68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.632503 5031 scope.go:117] "RemoveContainer" containerID="50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.648531 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bcrl"]
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.655978 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4bcrl"]
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.665462 5031 scope.go:117] "RemoveContainer" containerID="485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.707821 5031 scope.go:117] "RemoveContainer" containerID="68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"
Jan 29 09:48:38 crc kubenswrapper[5031]: E0129 09:48:38.708199 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7\": container with ID starting with 68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7 not found: ID does not exist" containerID="68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.708236 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7"} err="failed to get container status \"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7\": rpc error: code = NotFound desc = could not find container \"68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7\": container with ID starting with 68a47ec5f406041cb9726f593ea79158757898173025dd0081b71f19041ebef7 not found: ID does not exist"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.708257 5031 scope.go:117] "RemoveContainer" containerID="50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749"
Jan 29 09:48:38 crc kubenswrapper[5031]: E0129 09:48:38.708762 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749\": container with ID starting with 50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749 not found: ID does not exist" containerID="50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.708784 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749"} err="failed to get container status \"50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749\": rpc error: code = NotFound desc = could not find container \"50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749\": container with ID starting with 50cdab387e588e176dd6bde2cb90ad75a7ac694bc6e791d39ed2afcd9b9e9749 not found: ID does not exist"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.708797 5031 scope.go:117] "RemoveContainer" containerID="485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243"
Jan 29 09:48:38 crc kubenswrapper[5031]: E0129 09:48:38.709049 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243\": container with ID starting with 485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243 not found: ID does not exist" containerID="485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243"
Jan 29 09:48:38 crc kubenswrapper[5031]: I0129 09:48:38.709073 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243"} err="failed to get container status \"485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243\": rpc error: code = NotFound desc = could not find container \"485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243\": container with ID starting with 485544bc1ba4777bab15e132fb5f4dcec1aa4056bbe03c9d9e615af1226dc243 not found: ID does not exist"
Jan 29 09:48:40 crc kubenswrapper[5031]: I0129 09:48:40.308165 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" path="/var/lib/kubelet/pods/6843c820-3ad2-4586-b39e-6ed9f63c7079/volumes"
Jan 29 09:48:44 crc kubenswrapper[5031]: E0129 09:48:44.097772 5031 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.153:36306->38.129.56.153:38585: write tcp 38.129.56.153:36306->38.129.56.153:38585: write: broken pipe
Jan 29 09:49:38 crc kubenswrapper[5031]: I0129 09:49:38.493656 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:49:38 crc kubenswrapper[5031]: I0129 09:49:38.494202 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:49:59 crc kubenswrapper[5031]: I0129 09:49:59.394002 5031 generic.go:334] "Generic (PLEG): container finished" podID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerID="bff30e6ab4ffecde26e3426329ae52528becd093b17e1c62a935e2cbb389b346" exitCode=0
Jan 29 09:49:59 crc kubenswrapper[5031]: I0129 09:49:59.394100 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76trw/must-gather-v6pbp" event={"ID":"a5df2e74-662a-4b66-9ccc-93c1eac717b8","Type":"ContainerDied","Data":"bff30e6ab4ffecde26e3426329ae52528becd093b17e1c62a935e2cbb389b346"}
Jan 29 09:49:59 crc kubenswrapper[5031]: I0129 09:49:59.396007 5031 scope.go:117] "RemoveContainer" containerID="bff30e6ab4ffecde26e3426329ae52528becd093b17e1c62a935e2cbb389b346"
Jan 29 09:50:00 crc kubenswrapper[5031]: I0129 09:50:00.099341 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76trw_must-gather-v6pbp_a5df2e74-662a-4b66-9ccc-93c1eac717b8/gather/0.log"
Jan 29 09:50:08 crc kubenswrapper[5031]: I0129 09:50:08.493573 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:50:08 crc kubenswrapper[5031]: I0129 09:50:08.494405 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.310680 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76trw/must-gather-v6pbp"]
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.311345 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-76trw/must-gather-v6pbp" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="copy" containerID="cri-o://565b4dff7315d5d2bfafc1898bd02658ee2d8af909f624c1542d988560ca7d8e" gracePeriod=2
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.322968 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76trw/must-gather-v6pbp"]
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.532635 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76trw_must-gather-v6pbp_a5df2e74-662a-4b66-9ccc-93c1eac717b8/copy/0.log"
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.533490 5031 generic.go:334] "Generic (PLEG): container finished" podID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerID="565b4dff7315d5d2bfafc1898bd02658ee2d8af909f624c1542d988560ca7d8e" exitCode=143
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.730422 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76trw_must-gather-v6pbp_a5df2e74-662a-4b66-9ccc-93c1eac717b8/copy/0.log"
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.730935 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.930587 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output\") pod \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") "
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.930736 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlkpp\" (UniqueName: \"kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp\") pod \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\" (UID: \"a5df2e74-662a-4b66-9ccc-93c1eac717b8\") "
Jan 29 09:50:11 crc kubenswrapper[5031]: I0129 09:50:11.943701 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp" (OuterVolumeSpecName: "kube-api-access-qlkpp") pod "a5df2e74-662a-4b66-9ccc-93c1eac717b8" (UID: "a5df2e74-662a-4b66-9ccc-93c1eac717b8"). InnerVolumeSpecName "kube-api-access-qlkpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.034386 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlkpp\" (UniqueName: \"kubernetes.io/projected/a5df2e74-662a-4b66-9ccc-93c1eac717b8-kube-api-access-qlkpp\") on node \"crc\" DevicePath \"\""
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.109401 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a5df2e74-662a-4b66-9ccc-93c1eac717b8" (UID: "a5df2e74-662a-4b66-9ccc-93c1eac717b8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.136691 5031 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5df2e74-662a-4b66-9ccc-93c1eac717b8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.302193 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" path="/var/lib/kubelet/pods/a5df2e74-662a-4b66-9ccc-93c1eac717b8/volumes"
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.545812 5031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76trw_must-gather-v6pbp_a5df2e74-662a-4b66-9ccc-93c1eac717b8/copy/0.log"
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.546194 5031 scope.go:117] "RemoveContainer" containerID="565b4dff7315d5d2bfafc1898bd02658ee2d8af909f624c1542d988560ca7d8e"
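The patch_prober/prober pairs record an HTTP liveness probe against http://127.0.0.1:8798/health failing with connection refused; the kubelet counts any transport error or out-of-range status as a failed attempt. A minimal stand-in for one probe attempt (the 1s timeout is an assumption; kubelet's HTTP probes accept status 200-399):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce stands in for a single HTTP liveness check: any transport error
// (e.g. "connect: connection refused") or status outside 200-399 is a failure.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the probe output above; the timeout is an assumption.
	if err := probeOnce("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```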
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.546326 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76trw/must-gather-v6pbp"
Jan 29 09:50:12 crc kubenswrapper[5031]: I0129 09:50:12.568735 5031 scope.go:117] "RemoveContainer" containerID="bff30e6ab4ffecde26e3426329ae52528becd093b17e1c62a935e2cbb389b346"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.494137 5031 patch_prober.go:28] interesting pod/machine-config-daemon-l6hrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.494695 5031 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.494743 5031 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.495510 5031 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"} pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.495565 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" containerName="machine-config-daemon" containerID="cri-o://fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050" gracePeriod=600
Jan 29 09:50:38 crc kubenswrapper[5031]: E0129 09:50:38.618579 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.797118 5031 generic.go:334] "Generic (PLEG): container finished" podID="458f6239-f61f-4283-b420-460b3fe9cf09" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050" exitCode=0
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.797248 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" event={"ID":"458f6239-f61f-4283-b420-460b3fe9cf09","Type":"ContainerDied","Data":"fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"}
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.797636 5031 scope.go:117] "RemoveContainer" containerID="25d3c4dfc92bf39011e601e057af1e68b30d01be5281c5cf5375ff05644ea177"
Jan 29 09:50:38 crc kubenswrapper[5031]: I0129 09:50:38.798773 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:50:38 crc kubenswrapper[5031]: E0129 09:50:38.799546 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.962349 5031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"]
Jan 29 09:50:43 crc kubenswrapper[5031]: E0129 09:50:43.980038 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="extract-content"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.980087 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="extract-content"
Jan 29 09:50:43 crc kubenswrapper[5031]: E0129 09:50:43.980120 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="copy"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.980128 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="copy"
Jan 29 09:50:43 crc kubenswrapper[5031]: E0129 09:50:43.980163 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="registry-server"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.980170 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="registry-server"
Jan 29 09:50:43 crc kubenswrapper[5031]: E0129 09:50:43.980200 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="gather"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.980208 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="gather"
Jan 29 09:50:43 crc kubenswrapper[5031]: E0129 09:50:43.980216 5031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="extract-utilities"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.980270 5031 state_mem.go:107] "Deleted CPUSet assignment" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="extract-utilities"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.992320 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="gather"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.992430 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5df2e74-662a-4b66-9ccc-93c1eac717b8" containerName="copy"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.992498 5031 memory_manager.go:354] "RemoveStaleState removing state" podUID="6843c820-3ad2-4586-b39e-6ed9f63c7079" containerName="registry-server"
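The recurring "back-off 5m0s" means the restart backoff has reached its ceiling: by default the kubelet doubles the restart delay on each crash, from 10s up to a 5m cap (and resets it after the container runs cleanly for a while). A sketch of that schedule, using the documented defaults rather than anything read from this node's config:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the kubelet's default restart delay after n crashes:
// 10s doubling per restart, capped at 5m ("back-off 5m0s" is the cap).
func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Println(r, backoff(r)) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	}
}
```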
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.995508 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:43 crc kubenswrapper[5031]: I0129 09:50:43.996065 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"]
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.191750 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.191797 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7f9\" (UniqueName: \"kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.191917 5031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.293337 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.293814 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.293945 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.294064 5031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7f9\" (UniqueName: \"kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.294209 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.320448 5031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7f9\" (UniqueName: \"kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9\") pod \"redhat-marketplace-9ttdp\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") " pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.327332 5031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.824616 5031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"]
Jan 29 09:50:44 crc kubenswrapper[5031]: I0129 09:50:44.858611 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerStarted","Data":"a8b38feef7b1b57dddd253fd34f54c8a31d6bbcc1205fb3113423cc83802ba70"}
Jan 29 09:50:45 crc kubenswrapper[5031]: I0129 09:50:45.876460 5031 generic.go:334] "Generic (PLEG): container finished" podID="07e2a6a3-bb86-4599-a883-e725faac7d14" containerID="e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78" exitCode=0
Jan 29 09:50:45 crc kubenswrapper[5031]: I0129 09:50:45.876564 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerDied","Data":"e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78"}
Jan 29 09:50:45 crc kubenswrapper[5031]: I0129 09:50:45.879594 5031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 09:50:46 crc kubenswrapper[5031]: I0129 09:50:46.887315 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerStarted","Data":"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab"}
Jan 29 09:50:47 crc kubenswrapper[5031]: I0129 09:50:47.900697 5031 generic.go:334] "Generic (PLEG): container finished" podID="07e2a6a3-bb86-4599-a883-e725faac7d14" containerID="e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab" exitCode=0
Jan 29 09:50:47 crc kubenswrapper[5031]: I0129 09:50:47.900779 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerDied","Data":"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab"}
Jan 29 09:50:49 crc kubenswrapper[5031]: I0129 09:50:49.919843 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerStarted","Data":"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82"}
Jan 29 09:50:49 crc kubenswrapper[5031]: I0129 09:50:49.945611 5031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ttdp" podStartSLOduration=4.48225707 podStartE2EDuration="6.945590172s" podCreationTimestamp="2026-01-29 09:50:43 +0000 UTC" firstStartedPulling="2026-01-29 09:50:45.879293902 +0000 UTC m=+4326.378881854" lastFinishedPulling="2026-01-29 09:50:48.342626994 +0000 UTC m=+4328.842214956" observedRunningTime="2026-01-29 09:50:49.936836808 +0000 UTC m=+4330.436424760" watchObservedRunningTime="2026-01-29 09:50:49.945590172 +0000 UTC m=+4330.445178124"
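The "SyncLoop (PLEG)" lines are the quickest way to reconstruct a catalog pod's lifecycle from this journal: sandbox ContainerStarted, then extract-utilities and extract-content each run to completion (ContainerDied with exitCode=0), then registry-server starts. A sketch that pulls those events out of journal lines on stdin (the regexp simply mirrors the klog key=value layout seen above):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches klog lines of the form seen above:
//   "SyncLoop (PLEG): event for pod" pod="ns/name" event={"ID":"...","Type":"...","Data":"..."}
var pleg = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s\t%s\t%s\n", m[1], m[3], m[4]) // pod, event type, container/sandbox ID
		}
	}
}
```

Piped from `journalctl -u kubelet`, this prints one line per ContainerStarted/ContainerDied event.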
Jan 29 09:50:50 crc kubenswrapper[5031]: I0129 09:50:50.291070 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:50:50 crc kubenswrapper[5031]: E0129 09:50:50.292750 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:50:54 crc kubenswrapper[5031]: I0129 09:50:54.328015 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:54 crc kubenswrapper[5031]: I0129 09:50:54.328580 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:54 crc kubenswrapper[5031]: I0129 09:50:54.372253 5031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:55 crc kubenswrapper[5031]: I0129 09:50:55.037096 5031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:55 crc kubenswrapper[5031]: I0129 09:50:55.091046 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"]
Jan 29 09:50:57 crc kubenswrapper[5031]: I0129 09:50:57.000114 5031 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ttdp" podUID="07e2a6a3-bb86-4599-a883-e725faac7d14" containerName="registry-server" containerID="cri-o://e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82" gracePeriod=2
Jan 29 09:50:57 crc kubenswrapper[5031]: I0129 09:50:57.981136 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.031275 5031 generic.go:334] "Generic (PLEG): container finished" podID="07e2a6a3-bb86-4599-a883-e725faac7d14" containerID="e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82" exitCode=0
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.031338 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerDied","Data":"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82"}
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.031420 5031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttdp" event={"ID":"07e2a6a3-bb86-4599-a883-e725faac7d14","Type":"ContainerDied","Data":"a8b38feef7b1b57dddd253fd34f54c8a31d6bbcc1205fb3113423cc83802ba70"}
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.031478 5031 scope.go:117] "RemoveContainer" containerID="e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82"
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.031492 5031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttdp"
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.053837 5031 scope.go:117] "RemoveContainer" containerID="e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab"
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.072679 5031 scope.go:117] "RemoveContainer" containerID="e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78"
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.073262 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content\") pod \"07e2a6a3-bb86-4599-a883-e725faac7d14\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") "
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.073433 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj7f9\" (UniqueName: \"kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9\") pod \"07e2a6a3-bb86-4599-a883-e725faac7d14\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") "
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.073647 5031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities\") pod \"07e2a6a3-bb86-4599-a883-e725faac7d14\" (UID: \"07e2a6a3-bb86-4599-a883-e725faac7d14\") "
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.074958 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities" (OuterVolumeSpecName: "utilities") pod "07e2a6a3-bb86-4599-a883-e725faac7d14" (UID: "07e2a6a3-bb86-4599-a883-e725faac7d14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.079329 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9" (OuterVolumeSpecName: "kube-api-access-jj7f9") pod "07e2a6a3-bb86-4599-a883-e725faac7d14" (UID: "07e2a6a3-bb86-4599-a883-e725faac7d14"). InnerVolumeSpecName "kube-api-access-jj7f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.099027 5031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07e2a6a3-bb86-4599-a883-e725faac7d14" (UID: "07e2a6a3-bb86-4599-a883-e725faac7d14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
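After the last TearDown above, the reconciler reports "Volume detached" for each volume, and the kubelet eventually logs "Cleaned up orphaned pod volumes dir" (at 09:51:00 below). A sketch of the safety check that sequence implies: only remove /var/lib/kubelet/pods/<uid>/volumes once nothing is left inside it (the emptiness test is an assumption about kubelet_volumes.go, not a copy of it; the path layout comes from the log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a pod's volumes directory, but only if every
// volume has already been torn down, i.e. no files remain under the tree.
func cleanupOrphanedPodDir(podUID string) error {
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	empty := true
	filepath.Walk(dir, func(p string, info os.FileInfo, err error) error {
		if err == nil && !info.IsDir() {
			empty = false // a volume is still populated or mounted
		}
		return nil
	})
	if !empty {
		return fmt.Errorf("pod %s volumes dir still has contents; skipping", podUID)
	}
	return os.RemoveAll(dir)
}

func main() {
	fmt.Println(cleanupOrphanedPodDir("07e2a6a3-bb86-4599-a883-e725faac7d14"))
}
```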
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.160381 5031 scope.go:117] "RemoveContainer" containerID="e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82" Jan 29 09:50:58 crc kubenswrapper[5031]: E0129 09:50:58.164215 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82\": container with ID starting with e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82 not found: ID does not exist" containerID="e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.164273 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82"} err="failed to get container status \"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82\": rpc error: code = NotFound desc = could not find container \"e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82\": container with ID starting with e02d674c03adcea16d4fe2f5dc547ead03e1ec2b91b44d44c9519e8043c30b82 not found: ID does not exist" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.164298 5031 scope.go:117] "RemoveContainer" containerID="e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab" Jan 29 09:50:58 crc kubenswrapper[5031]: E0129 09:50:58.165030 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab\": container with ID starting with e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab not found: ID does not exist" containerID="e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.165099 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab"} err="failed to get container status \"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab\": rpc error: code = NotFound desc = could not find container \"e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab\": container with ID starting with e6876be1905ac03a2f1630f2823d31b74329cad87690e73f129149970c9ff4ab not found: ID does not exist" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.165141 5031 scope.go:117] "RemoveContainer" containerID="e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78" Jan 29 09:50:58 crc kubenswrapper[5031]: E0129 09:50:58.166035 5031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78\": container with ID starting with e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78 not found: ID does not exist" containerID="e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.166121 5031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78"} err="failed to get container status \"e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78\": rpc error: code = NotFound desc = could not 
find container \"e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78\": container with ID starting with e44a6227d5f229797cf96924c51306ed66586796dc107ea4754a83fdd3c92b78 not found: ID does not exist" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.176144 5031 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.176178 5031 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e2a6a3-bb86-4599-a883-e725faac7d14-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.176192 5031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj7f9\" (UniqueName: \"kubernetes.io/projected/07e2a6a3-bb86-4599-a883-e725faac7d14-kube-api-access-jj7f9\") on node \"crc\" DevicePath \"\"" Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.357929 5031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"] Jan 29 09:50:58 crc kubenswrapper[5031]: I0129 09:50:58.366503 5031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttdp"] Jan 29 09:51:00 crc kubenswrapper[5031]: I0129 09:51:00.296261 5031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e2a6a3-bb86-4599-a883-e725faac7d14" path="/var/lib/kubelet/pods/07e2a6a3-bb86-4599-a883-e725faac7d14/volumes" Jan 29 09:51:04 crc kubenswrapper[5031]: I0129 09:51:04.282665 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050" Jan 29 09:51:04 crc kubenswrapper[5031]: E0129 09:51:04.283393 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:51:11 crc kubenswrapper[5031]: I0129 09:51:11.294951 5031 scope.go:117] "RemoveContainer" containerID="aedb484bfdd0276047937b71a2765709b9db33f9ed681d0f49773772be660aab" Jan 29 09:51:11 crc kubenswrapper[5031]: I0129 09:51:11.320419 5031 scope.go:117] "RemoveContainer" containerID="0aeb8a1fed215721ddeb828a809a3d044631d57b6409d0c10d185ba8987b3010" Jan 29 09:51:15 crc kubenswrapper[5031]: I0129 09:51:15.282533 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050" Jan 29 09:51:15 crc kubenswrapper[5031]: E0129 09:51:15.283170 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09" Jan 29 09:51:26 crc kubenswrapper[5031]: I0129 09:51:26.287959 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050" Jan 29 09:51:26 crc kubenswrapper[5031]: E0129 
Jan 29 09:51:37 crc kubenswrapper[5031]: I0129 09:51:37.282293 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:51:37 crc kubenswrapper[5031]: E0129 09:51:37.283337 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:51:49 crc kubenswrapper[5031]: I0129 09:51:49.282575 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:51:49 crc kubenswrapper[5031]: E0129 09:51:49.283309 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:52:03 crc kubenswrapper[5031]: I0129 09:52:03.283023 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:52:03 crc kubenswrapper[5031]: E0129 09:52:03.286329 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:52:14 crc kubenswrapper[5031]: I0129 09:52:14.283581 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:52:14 crc kubenswrapper[5031]: E0129 09:52:14.285430 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:52:29 crc kubenswrapper[5031]: I0129 09:52:29.283417 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:52:29 crc kubenswrapper[5031]: E0129 09:52:29.284961 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:52:43 crc kubenswrapper[5031]: I0129 09:52:43.377790 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:52:43 crc kubenswrapper[5031]: E0129 09:52:43.379060 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"
Jan 29 09:52:57 crc kubenswrapper[5031]: I0129 09:52:57.283439 5031 scope.go:117] "RemoveContainer" containerID="fda6901ea548ca7e460c389ee71fb2a29aa12e996103e7065178d69ff7cd8050"
Jan 29 09:52:57 crc kubenswrapper[5031]: E0129 09:52:57.284761 5031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6hrn_openshift-machine-config-operator(458f6239-f61f-4283-b420-460b3fe9cf09)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6hrn" podUID="458f6239-f61f-4283-b420-460b3fe9cf09"